[Binary content not shown: POSIX tar archive. Members:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)]
{S=6J~~Gz3tͤ7U | s3G|`ڳrCbgPpgV Nė"m_9&@Q@6LF 儝] 8++??(`dQUB; 63@\-qކkc_|+ƳT}Y_'.B8]#DkY,G2ZXb ?RC˖W9] s%nMp9jn e`Lmx61⏍o'4>N/> AEgoq /.gsrnWDd v brG0#2f7!6$斮aHJ:Y],ƓF>Mf.>]/^ ['ljX1l|s%.?OnBI.szF)ǿ|\f[*U&1l8=%.ǟP}ӟ~9_)3_?ϧ;O8.ztvT,9˧_д.47oia%GN>ڒChw|r50,@zqӰ)4/|5W4<_n#̯f U4Z+ W0 }1Bteɦ.ps##f[c2sГ$Jw=7y5zlF,yd-%xQHd|m/nm˼hqxܦ<4.Hc|,Z3Q#!Nrvfizؚx`h {=OٝM}"C^/U/!͊kBJIUwj*KJ[U*.|tFP6Hg1jEs29_<$ܲu+ktN{Z !*#%rF/XƐZ 3ףTsezJ/2L v=V~B }UvܛϛNhĚY2tk8)ukS9!dI["(nx$4/A)TԒ8kSVs[0c9Ro/ [:t[!F{Ffva;"Vo!d;f&q\7/$ln_\fk^ |(+A| @q>޵xHuYhn,_I+Y\)4nԤ%J;inJH&u%%o+t %z\eD^]!yte|sh7( `5?|O<~^S(e' PfaK>U ~990pD1O=?n8}@.r7#VB59_%)3R0 ~2 jިluUAmYzK"ed8~/i.cqMlBeZ źM{HO)jhIVE1+Nd1W!^0 7I2 ¨M..@\sn`Kmc>78#i9@?77ɯlwT-Ҽ0ϗh;t@ny7F (mt>ۿ7QS̊,q$qCHG!H#CmFf)L"8S1DS) ΒV7ym|oɌjar\had0tVm[50=.O =0{`04:e`xaP, C=&Sǒw*tWCX0:Yh-!2B ԓ ;%)f%&_;&e^y IY$&bukEnqF215gB}qoQ];;|XF:_[TK܇cREeX&޶Rϖ9Fq44ߎ*vR:w;ס쫝]W (oE󁷐ᤀmbu8 :S[M+,&JxJI`Q޵q$e/{H~09ٛH:~YG5E2$e[9~KP8"){H6gUUWѷpρńP?4K.l:z^uwoI ݫU\WLWW:zߞKXZSV;`pqVWcsg^7ο+hDZwhIegBp*%L巟>n˘q_O{j}a1q_ܠf_m\ p`NȲ'>:B~TFz!RҖx4b] pt1K:%)-AvVpT%Z1w#r0؋ O('GₓAL|WL>F#T;] | /ϭ˭u>"E>[q"&3} TOwNy?)&ELRQ(pRDXįR"_6H@*I"hnR4ɹ$h$*AvZ:xR0wB_)J3)r @ T(M12vBJ =r3b\wf1smܿ*VT4_EAʂ&JNw!$$JZ#p w&j \Z4+T%7UZyJw%Nb?O}SOcGKHo=A%12>Ǎ4ᣱ&A9"AdTS9omM5>+av +l^c%x(ìC9uǠbREʀpRI ~vAE"?l$Ǚy{&x2?s;bܘ[1➃vf,LXaφ6umoPW֋\ %`-*'R6 I$QJLQD="h%TIn<kRWs4I{ӑUlI4t3/ mmqwP@PO-WULEo2ߜVk\ciZsQ!ĒqFKQN+SR :=GzGsǭ{[i(pmY+5 њJx"mP$(#D '3F&l&'rS>RlJ3.> 'd6SV2QMֺMH/xzQ/Q׎Wn>MFsxֽdV_oa1`d#/KF_eo`n -9m6ߜ^030 b:z 5[Aʺӆ%v $AJ+E*RǠJj =nǏ =1'Y85 8O1LQ1h 4z!@,P| B3dfy뻈rm4DJR0Hp:=1XD\іqgk"}kzSZ^PHTr/pava!hkC]WMcM(H"AY,wA@y'L(H#*NThDaRk(c$rq 6 f(,q.AE`jD>jKS$Jk2 ]rD;o|x4Re2j0^'M0;=nZn J4̃v Ml|BA9gMKt8SD Nj5J5x Q$|DIqw#krb-E,FR.J"!M*-A,LzYDZgs8pwbH_ wlH{NUِk ٽgڻ %٭:p4:j-?!ĐgCG4Ϥt(*y=?<=U%3_zYI Ta;HdHDB$0!u8DBE@#B&kAX %Bf'ȹ$NB".%eX :$I&AUA$ 5Ƅļ˴҄Â֝!m.t4hADC8:JgMxL@(C>>èR}ާ@$)4Zxۼl}%z4!j>R]Sރo;@ =OMN:"h q4ar0.ah< ux#( Ձ "Tq-FJT=(^|lɑ+# 4Z2;Nj^20ͷŁq;?-GnS>܆_||Մ9)%>XQG2K%ijQԞp20$smItAKCb_>7,˺Gq꩏VIAH+9D!A 
1>*/5L%a;-m2iLkS0]]5T82XPp@qw=v?|JPy76OEb%Z311S 0Z>$YHZGk- |.[Yh؄lVFN\.Ȥ/#pi$LwݍJ"J2!4B*cUcD)HX@`5ݟvIe.|ZL ׳,MYwOϔ=]|krڨR}1G2o,<͜t+ZK ;QZ]Z^? x_$ktjPȊ)(9Њ 'n2+߿9L^@ߖFp* w@lFrhwWsny҇LA0!_C!Y+;G)Fb"Uj^?@-n֏Je~:ONVywt11q%0Qz}6t1+URc*03͢3WKor%Q1~Vs$aySKF5$-]5546J,pF +3g1Af>ѓet$2 Zm+7-o0#a|=b6N -3{[f*UH* f]"9чo~|?߿}2?{rb]Yf`DB4 XyLP4)p qg:u&0H}hh& dk5_qecAKw1|і^E[U_Ҍ>>S^LtD4jJa!VJZf*i;ge{HCBQ$bQ;~bL!Q< hD% )BbD )C-@rR!$ՉKYJ{ZƈugKgƛ)S#J zGZE<R 'Ukp[{E؍ՔlrU;3_OU~%o)mɍ]i-G.UZ? vC9* .t5\\=N`@2ͳYU%K,YԺ}NWJH[ԼRr3= n-Ǽ i;bՕ?Pqtp5|WFAQ EU6Kױ3eɬ.z| fIYB_:@܈e8rkgI?_Ɵ/.ڐ L(KRT.X蕖!*iB\˰e> ٨p wx{J'rn=$AxuAI[ J\_Ip 6 I7b/z{|t@*d\#1o=Q"V9Hi)#*̘.{gCj" jrWėZDPO Nhi;u\8M"2v2*2xL $NkS2N0p5@sؖgk<4.`"B'hTw>g0o5!zz9+*' bdpZF%ޕ$Bev]RfmvOwb܍y#R"DQG^*emKd^_dDd:Aq}, ɍK"P W9FWܜ0 NEoˁl#،@mm6| .g/_ĺˉumE^kϣYr;4Wh>] 擲E4 i\㼉@T^/p FJ26!2yޒEΏ@ϧK/tfn=ٽv֢E{{|w¹3e?,4˻Kpɺ;Լ11q)r[#h'JH*-z*IEQF z$zԭ{-qPrpM.Vϊz< B:a6֮{]k:C&-#wrxVqr7+w=W;*5tk¹)qE(P6p,J%L)yU9&C`Vy:lʂ6ZeRvHK6g &͵JIզs?g\/< /FB^ވ]fGL/qqt4~4:<Ng8Ƕ Ã0FN1U1gyR:;Z[Jjd8VdIP?@@aBe2 Ice9bfھs?gtM]:PkCϵ{ JO⒔Wڸ$7 $FM2̇TY"haMR=zĘ1m$)B$ Y. B JԨh0p!DYc7=zH%^4v&^ ^;+=.~՝>MXoɹ|RGUZ!6iS)B>'w飴vàx)W˗Pa,g2;N1 -ik ЫA?gon;igDE K Cηh0ln}<ɰZm7}gӗ\~YS.0xY7{nd.r~3K<sNZpgP%Zy nWS>B ܔmwsꕷiJrX<ެ40uwwo--gT.X?5ie+ȼ=0{`>0oi=B;puAĠX[;DK7q씣I=H-6\&2|N<`z.Eve9vn]DZε"5'zX1SOaq|\⾧,֕pxKXCLhpSZ-}x:6hj"8 :6i3.z,^)9,jq/7XSWTvXIԐNzk[ks34n6 (ߔ5ovJ kDm2A tLpjvRh"Z~eh1# YC[. }=O)/Jh#b+1SxrYIV RITt0rH萗f$B;NW3 &jkc^is@ǘz#'N 6Wvj W,=*7&54/ÌD3ebzRb.?~kj6,Fr(:$31Arp te@0IpTZAVhL[ k\_8{:fN$J9"-C,PRtZAs q.'g :A;Ɖ[YR*YƐc$[ljP0DeUΆ8kqJP`iuʔHpLf)s$%V+(IIDQ;!2hC&>GQo: )B2 1AE:5zr>7Zq6^OZw/ IJtEX)&<`zVHGCF-'A ث˻j4~-\˻U7n<[13u`Aig2$iT1j^;)T Ыm,"4 6[^E"[O4#;0¡Z;qFVʩ eBtM5 N5 P=A/`~ԝْ4m/U.t@ivtHMD2*I_8JtZ5kV{9`e~?0MqTzj8482&֊_}{u;}e]~?dŃ%sll~}fxW$r"N+:I. 4HDǜI*ӢlOTh&ޅ-`jZ2zNN)5o{Jɝ7iƴ8]i7$zǓfaS+MGCTB%h-xss%ddNK8Nm6)I*,%@rN]H14,)}@?\&IC 9殻U01^3 G߽mzwd|yqSJߤe2H>mCb2RrCчtugIp8Ik'B#6HHʈfj}ʓjiKmzJ$8yQ]~aᜅ'TB@V>S(N:%ĘCʂ)y%! m(-Q' q<'ǒ^k:>\>76akrABH i.$$ZIeMR!52\2@gH" ~AG"bKqvI. 
"ib(4Y.֑, C@O@@M!H}*{(["my[#_nBL9"v,2 1DО~IqN%wD٘=? O/DZ:6Jh\r%J |TYG:_x` LBK<10=iXTJB&rεB#HJ#ş9&%THBS w(q:ρb >F鹏׃q6ԆCk-8?WߚvT;*^Ee׃?~{?8Do {Vi52ђ@3 A"[d}MK<bb тtaty0<=k-DU9}d G$xtWMoqP"'LO;ܷLmѾ7\G 2F:(Fhe_/Dq#jit|527ִh=7/&F5}/?^̎^ خ&s=x]5#`|6bֺ %[moH2Α{ӮasY0׉=6Oe)~6].7zz5t8[8U֏\dר]k˛>.1RVD>;gyP!?|ݩ5j? qx~LbGz_?~|?o?G:qB;0X ~n8%rt S4a AZ~M0?]EvkVke!>銏tT&!cSRg2Uo$6W;/>;έE;y8AU_=h:NG5`nHqJH\U+؉O{蓵Udmc;r.=-AI0T(]H"e69&UWF$ d1M:Yqz! )XJf)9ȉީK2bly>9o[s<\F}<,rrt:|i% #6ؽFѾ8fzq o3#}L߸$NmzOzs_UMx!x5n_Z7뻧OtkwZ͂Zkm90~TXYX\~Txl$=je)r˒<ƶlf=~U]UeM7W~] -4r=?on\o~v}陷]Pp!C7\8y:Z^\C g|{?Oh#7mi/{no7Yߩ櫏;T]rpWVc^i8.X4SMpTiohPb=~;-GYFɝLt#r%/T`@vS_Uխ> LZ6$YF tiAMLGfUݫ>>6?N,#naۡٗf9wӋ٘%֘/ޯWNIXYڔTΎ7uVZ$}t%7HVbKĠR۫4:r;$!;}Qzm6*GV{jlqŹs\9S\~b,Y*Ό=3޼l Pnjpy=~pd-)< _F+Ƿkl׶!dlrsr>{[Mm]g_%u%巐:s:-)r[JQN*4`٩܁EQFg z4:8Q3&el%}V|JE!!U[EJfce9[Az>nL(W2!e씲Z=eje>?紺K[+(H΍N좘}\]zȨkd,F)5fr*OF, ՃFL.rs>Y3G)ٳg ye]]W],qzW=WNn5ynɟ hd5L̆6rL:T&R9{ϓBE+B2K YUcK+34'dB66QCА ce1hlF٬uZr.EkW]Šv7]Uf'Pq !DEB@a&!HN5_{a9aw,X4b5U#QqЈ51mVT@2B jH 0Q5=m'$m=E ͓X;ZuDC"ف tM&_/.oї[B9>F`V jeC\7d*Q84.vx38=5=x\ݜR0ъ7˗HƗh" &\@n~}I>\.ުvjI9\]0s>n}˶vcڶ@gš\>c%]6 7G)7@.8?O9VbP$JRhII&pQԐbYY3'v{^Z+Z4bsG~JÝkydd9sų$G,XPP;z0;Ŕn"U&6l wݞJֶ+2 tkK;hcr+!Aֺwu:&Rw]jEjhF'Pfb&ą{&3K>]}tk{&KLVmHZ\Ƥ턷bJؤk^^ܲ杺PiX"j}튾|W|@P(j28Ox+N*F<3輷^#dC:gXtПVsi)cN:?.;_QnWTON0SxrYvQy RT69&S9$t˟ifR;Nx5cL+ :mQJtmp9QJoqu xm$'c٢4ODȖd_.3QL.^Ɔlv-]y457wط[nfӼrjk*Fqi}KLvX1.Et!pI@`^) F{PZ0yj&U%x3gDK V)GLr( T]x3Cf,{I6Xh812r a"p¢1I-0-Z;YYΪryv wmDStʆk rZT-$IFQ;)DS[a(L H`/ID/ Yf{V5W4I =E<B.hK($Ss V0YQ+$ H!QEq]5iNzoV{$OV̌GX!ygD$^$sHMs m""D-B/x6%"; M 섑5jMZv2X\)RN'RFRCx,Qm_TCvQY:UgKҴCi?mHyBlrd&zr־"eW4IBډ%x ڇI7A4!Eސ!P6 5WC7$씚!kޖby0 ,Ye:c`b 2&ؤy$/$T )y#R`< \+ȣ`%"B0C0k'KW#g3% b8qHH,1rn0`)mC,`bVXN`Y%n|pj;U:dLb2铤7<Ơ'RgHc >!<cr< CwT҇JpM0h(rosyz!wbIR\!(WmjZ J,Ű ťϝ'r+2a-S'>)A.T1"_IP9*P-VnWyRVJ|lvd&v*22\O?t~h=:P\?#@Xpt% }AV:$/凹@瞀=vt@>~jSGo0QY,}gz7pg<~7+gDcWWLg?G7w{|QӿZyYi61Emifc~ /%=1imH>37W(}!e:Ω'YRӢ7}}z{A7Wdz3^vVN%CKъhO4l?S,[ ]lqg+#_.Vr}g"&,uX^TcuJV;q!ڕZo]T8*Xxv6E>KkZ-hs25jA ZZj 9Z(\ 
#JcThFKݫL=Sэb֢kQ/u8?RR֖v[!t:H6omҤ&D1 pT Y3+EWd`nf :#M1E{) wLx%Dŭ4YV1+!`; If( oR!TR2玡Bi&xB. :ރ%vwhE`{ d Lx҈"G&t]]>n>"U-Id3<#KƜ 9X\|}1>5sMUEzr7NU toD $Y1ڢDwIGo`CطJ0e>,QڎMѵ(%|'ԴIE[RHqu~J!u€>.k#U>)BNw'餞.]FT,CD ɞ5))idw"R5V&Y)dW(! QS$,;vDiF*7[X3RTiƬ⤁1&bbC8I "&d w3l\.f%pՂG{{Y mzZ$7xsHu@Ky@xc#PYiV2Mt%CHhΑ0<9?n _fRcA39zjN(dUʫ k":02E7xѴy{KH'*@Pǣ@܎m}K V'ES U_~^j lbZ1pߩqcU0~v6H/&9 (T—%|̡cLBP=n;ݻ 5c,C)9iDκ$!;V@r0]Zx _s@[vRh ӵzNw8= ΃pPJ"4)Y c~XT:b$1`#Ƃy0) !.}p$,J؊NX)]bAn+itҞEw64IxFDj3+jVEC,¯X@hP& %l@M}wC-J - ڪmryXq=|kye9/jv9[muL%)Y`֣;V$FOB=V`&<;-,F ݚT$$QzhƤӳ 0csGzҠD+Y.T@=ʀ Bj0FQ-B>A ==ZiaKP>]Ow؊PD4cҀ'7(3`Q' 0 .LʴBѢa6 Ed$ӽC`G7L:lp?{mDVc- ̩m X{uAfaƑ@ZpIPAlRԞKњI20Kfj)څҵuZO;z*DES&ZxZ$ScUѫڮ qcX©:ltiP/zCLօj4S"qfd98QN=zMWPO,iJ ]\1F!*uk !⸱ކ`ݤ>jE#WnVӥ!0r(:ff5$]W!0xSt\- Pk|uVW4<<2B.8!=fm~۽y=EUhr@[@+bChmWׯwO3`p9Ħ*F3kE4q7NyĐU5?ȐvcRy:s7AO~srZ708AA)N%mz W9ۏo :#J0hy4i5X4Yԡi3|i:*3q}5;[ee׼e y}Zϳ8I0vfby5+W7wh_i 5 Wgœo.rqnڄyiXl6cwo?'UXwMJ-9+_ >f{IO/bVbPBPɶHҧxQFUZu=uSc:&E7T@kQ'Gv}D eۛ!Ф)"DFe)Tc<"F h Ԓsd4['+K=]hv!'``ec9c>^Kc#_/=8}'N;ϕHmE đܷۡ9ҭB?haI@0Okf~\|=Ngip`l/[k^$$*GGЦ4|A暚kCzQE.0VFLN~fkշI]t^t~}jǐ:q# 矸^;}7ȗڥO%w:wX!U;vfSf?XE_ H%SUYk/>i0ʁd|lk?Dsz\v{. 
u-v̟.N9]Ui>γ }?ǓNm|S<¢푢-*V=rS?T͠棳k\=cXwvlXzUrLv&n&xILlJʚɶbxSUD:񿟂wkNkgs)h\6=ñ_,fם{zO`zGܸ!?9x͗8>oEks>dɬ^X޶Ny{˝m#nfbˢu 6WQ錬 (SQr2=za:m+PcBDDWO+SH8aLfñ2A(Rt|摒UmQ BEnjjIPQeHZ U*79:,+W+h9!* Q7tڙNUcDVKv?;{E'g|N7ѩ+)9(.[Y|h~X[3w?lt%`KO{vqw6Ͼx;$w;w^}Kw5{[5mA{*c5*w1aG> kf1abbfW:M Az>}a!NeU`B{:g!cr it!]w޽x Y^c =g{yfri,'-]}{s&T阮+ղBpõ>sE rV8=cAJK܌D@ pY>t(=XMIȾtjh_mӑFFOAF; w^#wy|U6pz;[ongNwvAUln굛-wem-u4vWיdnU%3ҰSopDJ>[׍^Lݙ7̝p.pɇNAje|^[Ӟ{7Ԯwt"c:VMJA};W?y7~^ _+s֠`6qnO˸q>ЃYG#0*>{9}-'mZ:hWöCZېz>y6uHӝiPכB] :Kz X#iIYBݦ-IJQdB霹Yr&k%Fa:X!ŽLZpHc1\Omfyެo29%>iYa!ݍogiGg=D8t7xeWR RiZ2<A۸5r0TUt֊Fz4,AGH@/jeeM^[1(-$TAR[zFr"VIa`s`V$bޠMUM^B)ԵyL,+UR8e@I.,6f`a"CeW6yݗB'cQ%m]x3&ʼaš6.i/^8pBtAÑ 9-E׆>+/|CNRW˴m$zќЩԁ.Rs$[|2N`d{K na11jQB@[A\#'i0S8hJV|fdOAPA$G%[~]ӃqDvYpXƗ/ d廪dc9reL̙#K.Yj|B!gc,gm1Th&t[yHOWӺW'&;;aRZ ]1~s,FpwW0=+q_=nF'fiiRWzgSNrΊvTC`Fo< yID+TA{Vmǘ~2/PUQ7sc;LR 2΀ LX\0ԫ0SJWpײj Q<$0d%JIb4lH:(]?ct-<9BO)(4LreN)Yݑ'F@-T#nIL8)0¶`vO4љug;eM_RASICY H"lLrΉBW5ƙ]Ω$Gu%ȍ&ZB.#mߊR+Lk12@+NMJ1G@SΊE⊋?(сł|RV> ozK,uRs|F x]e6b-:/ü_{~M~^Z  ZiƝa AX{b#<bGg ήg1V<1^ 3floI%QU.p"3OnF֏iv? a[җ|~k/ο޵oHۢ}mݻET#sL gqV4K+>_s₎}ZIb󱷲iޤy_[S?&O~n~a 1OAdo[풑7r{Fn| o&di$7$.7 QmfX~%M8j`r_ f=^ٿlYj:[Q7UX 7.ב82"(oػʽ"6]M+=N' :DtNbO$>/?To?zӯ8Oç_~O'Οhn0MSSǓ&&`}CZCxvV=}u5a5#d*8?r[c?_IW%.Ss^}Ml~9{qE;ޛdbiV2"r3ˑS&OjCzS +(@$!za2‹b&Dd$0IlCm/nmɼehqx'|8.M)ڐ$ @4f ف^S4 Gg.u\1j4bh`{R FBtXfs㌆F:S7EvDZ.`Y|VN'g:pǞԏjǥ$YJd.r>w*h@Wƈզ3ͧK"ÛѷA{@uTa Ƴ!|f]7F eS&ӈYBq쎮]_]ߴ}tlInڸ$/mAos u4 ۜZ7Ժ]iLZos8~ؾmԲ͔Zv>yxw`=z^jFWW[=n[yWwt<^N<잯Ҫ*9[ z6/%/nS/Amԧҡh˚[I7IV]7'nNS8b4:{ʕ葅,}#r}x޺o.~FxrYIV Ѥ4!C^ 0#JAW-&1RrI#5]R")68R97A 6TK*]Ԛn5S(Un|:=uqcWCzP{^1Arp 'R&$^* +4Ad &w[sB5qpg@I!R+D,*2DŃ2\uOHϲtv!#Xl ,Ec:Fbu+X =ϪMgO>;=[-JO SFXKkd26K=ZtD H8'FІˠ]5Y)ѿ|Orԁ0lt d/!S cϔԪ͠ӓ:=K۩:zoAJtEXE(&<`gAGNZpO@t8)Gj4FY7CeWDoxԁ ɐQr"@^qΝ*ms(ճJD2"c7{dF8PkRԲzr60ڣJ9o:w1cs?Sjr(w@ez;MYlv&MdCZ QNf ɑM%@[$ )\6(>ϟkXR Fr MH7t(sw5ɱ ׌ c 8<͋Dp#c" 5^$C<F F ˘!oMG KV.IYoJK1nq7%wK2ɩe˼XYOJ<<4=CK/舙b,w|q=li4W5k$TMvX.TcR'<0袗F0fdR6JJn]O@@Wk)l`6$Yʣt4x @nӑY3sg< >6ף? 
|B<2O:ϯGI<\v؆gq\Ca15bf).%ѭ1_|:%4*)po 4$|tKKP2*+O*@=*JRe/âS[r }76'ʑվ6g>.yӝ٫.v[$zbY4#iVKhhWS4LR4֝|Pb+ N&9O6dWeG_ʅ2RB5k7YPdwf8KԾ(ywJi^#naH0kI'-<,}\1G8G͊풐sA>uO IRD $pos@$w"`u(#7HM%3J,)rqVϊz< 7jR0X&Wm:{+GG(g<jtr*z)%/iP1 K=%Si"\pʡm8,J% i~r*OMYp уFL.)q,s%}&!U[2Vnɸ'e* yeY;Yp%mVx=ygqHIWwKl0L6VTƟF+,\追:õBVF`VjeRgN2R(ul o2oG7Si4d}vE1vjrќF2 FZcwy{z-»oI^5>߼E7^7m_pgMOŹ3s.QaVlYӛ7G7@.8?O9VdPJRhiI&pQЛ27y tͻ +[6/[Zb3Mz OӚ}KӦǣ=nf6ly8|Y7 ]WXM)^X)CM''5Cz٫ƴSMB)6l f6Nq7{o1Z_{b?bfiiS':fJr"̧%'$O'+7ߝyt1,Z7ܳHԻEϚ<7$ -oFmv;cJΆzqsU5UB3Wo\\2WEnƜ*:v檗ѪEs2WE`N\S1W$bUfޠBە%Z+B*|(w}vtq>a%Vfox5s%ϙ'_wf4Qo];KN -5pwm:%3]dtw&i8z'H)|f ܝ*CW hc7WE{ ">%"=sE \)ECWo\N.?R\ePav`].:?~+FJ39)GRQɆngCr4j'r`4(Hfs`շh ??Ng^yF6|י~-@ُgw3ƒʹF pUl֍غ[7bFl݈u#nd/غ[7bFl݈u#n֍غ[7ۈu#n"6bFl݈univqvXaѱ=T6FCh]94h y!o4䍆)7:FChț4RFChYO!o4M1b6FCޔѐ7FChțk4䍆ѐ7FCh y!o(ѐ7FCh y!o4䍆ѐT{eqATO'KE S7ܴb4:e猉D,d9iK:pI$8%zeYT6l f*O.+.*oAJ14Ӥs*y+`6B΢Ua&3 cL+ :mQJtmp9QހHNVڕEEЛ.mKB5{>F@FDV4\e|M0z;S6|pKkƷk8G9~]q6,cf%(ϗߨLmdE!c> yQTמT 2n M V'RO\sFD9 oARx<P:R4˃%J)L-Q9f8eҳ%B;i #P&2',CT"u@ɼ8[YW__-O ԒC uʔri0)Hf'S IG#yaNJm8 TO4nɃ0e|Y@$>x)xL"ze:5t3|FodZՠV6?IK=E<B.hK$Ss V0YQ+!4B4Qp]1>~ςV<ɓ3Q@AF#{KιnjV/"BN+"QU>X~l #j%Ԛd޹3Ri N"p[z,QQU!j-Aoeb~%5&4el9':||tmJ$J"!%N∁,iX d%ĜoH[̿桂3™q[v=e|(zMPm@S^o?WCXg~9X|3{NOۉ/o4(>,߿~@qe|;Yz="<OO${}no.Ǔw'Ok]/--Wu4Y4P mhk[>ܣc[Q28ܑ̀VBN`@^/385UV}G+?o?n)LdY9z>ۛymwƍ`XtJJ,TΎ7uVZ$}tm P)W9Fǹvs_ Ev *Bq]*K̃*yxP)Fz`̵ik Ho_20\yY+vSMBI{KXc_`>[]WWV%Aan~wkW*Wz|).:!|@qYvgiDQ}:i\c,'Dk⑺30KVA$…%fS*~yQ`{+U($$$b :x0R!Ql`2gkɘҕ DB="% "oYRN(3Zsr,A^r[rD"ה QJmɎ ^@œIC-")W4L4&=|eҁnaB!E}1bb:">rn0`)ůmC,`bVXN`Y%n|pj;U?0Ql1阒e"PJ$}1M.uc-4N|67;"`, Bto&uhp up,3݁XN\!(W;Z JPE[~$޹N_}_f)ɓIk GK I~Nw~bt%f8gct|wyB-fU 6P<Dn&+.Ex hR猂.Z+B'ů, /"h mDuY =ME<<ٚAA*ɴq⺺)m! 
N:ez$Ct< ]At,Ϡ# JZhIݺHznq_6)aEBHmx 8 V((lʜTu6]V R־ғ/Mid^o((&lJ}‰`$ؠϡ=!""S'CD@Z.NH+H A"N{!=}|{a&ҍ>=2[_8U6~T|^~2&\j A+9cd |c@e (/s0My޳q$Wo3d89DNX\r]!98ɤE@NOuuuuuUuuU+46+rhj6?s儹8Z1_#z9E6g]|m)(Vzp$iL̗STrWJ&EǶA3]cnsc ) AK"'/-%"K}ʧ|ZkcfZn`W:<-^ b!JLl5V:' )dz~,To)Po`yGDJN3ƕyBaҌ;dL;qH0[Ȃw=y%'c›bmk`vU@̒,H)  ְ%zB&Z#߿'$h Y{5S1N3`ɔ%$ZDžB_ h*(ǂ.Xky2P1.1[XՅqB%As\ҍ{_.*kܧՈ:$?YGY]~:.woM.X~y'49I޿=K.L~ IϊKg tZ{ D;On51F"G)(,,1iug:8脾NgJ`pU! HGM!jt%Kt5Э.ʴep&9_+u?dlr!ȧ[5 ɲVS diDc-k ÿ;KC?`#L.jTa<1uR4.F~+]?VV dKo{/ RmI GwW)߂0|y]OJuݐn8FT Q.q sBO}ٚqk{%h}AZ7VYN4|Ď9O!eI]ɨ8c]*'>zqzK`@]>}us&Ӌ7/a(0 .^CAQm'4;]|]˶]S6wrdm$x~uCwaK6ˍٞ[lTlύGLYt=a.|k=P,ycUDJ(X$HGf0l;E $T鎇:Lj#^95h@ w{紶P}vm rmTHYzQ^u_:Kg [I?N ɐZ 2\9ixAb,1Y[8xoVBl$"vh!L1v+P99]LM!x0k5f DJ24dhj4BZ"Z[CS}%mlr;Aja/כNt>k577r Aar܅g9L[DzHEm:rBc#Oj4ˢWZ:l׃r.lBZ%6w /o^^&or=?LF5O{cna)]7aPDdUo<\ʨ8*9iShݞ~ֲc^)2]k,R:D jmc*0:T/a&KDCIzSD,1SLBed&!7oB']):~7~Ɲӳ_^M`$d4HjqͤGd:?h{TƄd+y0%`$kĖ}KƏ)JV 9@* !E@ XDQ~F929vim*hxjCX3((dQ`5|a*$)Lq禽<{w40iCwOT&& %O5(XI_^4nn.4v;GQMzQ1WQx+kIrY0CI>?M/F^1{_ʿ"5}I^d鼇0ucsRK~ϓ jɂR/g_0K@ N $>cɦA/cQ.+b{YF. 
[|Ѡ:fp8B$#O0z[/| jc[ὐMkIVPA5+dHQVTY}AnnS\Mzʗް-j ,YGD{jy3OiRg{ZkF[yF|6+[K]>-,=:L|al&^aUtVYPq$G[C3pmRg/]=}m i nYK1ڠcSrh_o|+䮥L{ Kyu] Fsp~T$la:7t]Nɇ 57Lb[P7 ;GX~|1{"Rg(emgik2@ȴՋ/5qkі+.> ͝ N+b+ͽʼ߬No/z )Z moO =ћt wFru"m!\<Ɲ[$9hS Jp]kҠ=+xz U"hoѻaɆP Lz(1O[1OKqyz1OEuUDW9PUV}ӈeezWOG\ ~Z`TMA8TrZ;IN+w{Iv c 'C2͊هYSLB"t1/Dm?n&P$h[*]LG,ғӚbWW^HϝO{25CoG_tgEleUjxGxGEþ`Z?yq|q=f0: UX"Y L?Ed7÷2|aWeW|/^4s2\>['Q.,8=Nr]r#ֻpƇt78"$o^"s&b+پ[4Ku?EF!VYrx 3*) (9}ݝjVcIJr(TP՜ja<oGq}9a>+ 5Ӣ}>Y`]"Ix)K 2{@0 Vl#'D&\P2d1RL,&ZaL.$]@ɂͤ[I3 )h% 9#7H8?K/ 0Yų-Z>j]0UEt)?*bq2K.[Փܪ{@VurW~> 4dpk +N` SN.U)l &{Nbdnp(ЈLB`Sfg l((ǔ8 -_[0nU[TXéq2Ij) *C;+/p# [#uVG^ȿiWov\ sWIU!ΥbQ]W0=꫹<ejlU#%:K},uHC,Br" QQb "uJݷb{T[,CT|kP_[PA[ϼb+Xy2DFB2>r%"#&S^_4IC5㏋>쏸f\,ɂ2V4g ,a9X1J<LJG*"ܾOu'QGHg0 2q8 K (!(-b<.v65v0B`@5H!;2 !:fu]y*|}ָOѫyu:6.G#"mvNtX5+e3K|^Jt` IEJi°R3z$T鎇:Lj#.7nx↣5v3P?z;-7]vDpy\["֠czձ`W) {)S2Fo5S諛myA4B!K$e:i|a\ćIweֽ#(i0"ʀW<(ےb%.q%[0hvJFSզm4t"A0SF!3y.?I (cȆ:oYጵ3"%Ckayl2k45[!--kr+dޘi c8\.ðZй'ש{7՗8Fzy`.&_7.wGYVKYc$ע6O!Wӱjzj> h?E u%C Sh?N9 @m6Dzje -Ӓ[]G;7/5+-C^otq9&֝eutw~ t]2|~!u=}u+n; 1^8c?ͮ?xFz#ms^?U=ʂjֈ8Uȫ}ĔjYlU( (+lor1s JsD5 Cͨ:Zab?EZt>4qsc7bSV*#Bq`E T1 %@譢4zhtC r6]٩?#Td^/P_rI3Ps`]&UERPl"AZX|K.EsigTWO=Y!UHzlIU57)ҌscWH2*btm4.IxUU=I-ĄX9l=6GbKOIE 15j 8Qi,-̋ Tsxo8XC T*;rH]%g}iXW4N P Fb'-vҽ%s5aAA4oVKrFǑ)X1.YD2:U0 a)mD&j3b1sU C ŵ*Yrn3P>Amѹ9޷WsQa"rmG)(գe}?KD5Z%QέH1Jvĕ[G7d&l1LB7X;.ű;LLL!7a eM$z6 a 6l_.nx >_HײtdD2HRz He ,p.!V<ڰ$L`Q Wy^K?/jqEZ&Elkk1Jb|.MfXJ74>I+C(QSn`f JWU4vtS[EBU2q'Lz5.\<"T^B1߈b4SK&"{7_N8 #)[g&'5h4lR^璋"4 M(6+Tɕ ܫ:M `\@6<~:pJt .Bh TXyكf]FG`5<~#s/ _?gC>SW_/Oo&mova8>%V՗1_GQ2Ck K1 .^zPʫz1mh%(dF*%ʭRu  =RjF a ~g}=mמ͹w_K&L؇mvbYe6t ntCCc6[to>in^cƥ}ѯ>pd;Et]_Kz\7ӽZ.;gr?pe+? 
U͸ZΗgnߏ6!=֥RjE͐DjI5]҇zL̮wѲJzv Z7rVzf'XMbæ˵-O!nx7|/Z snJ>T0JT56cЛZ m 0Z -]XLE'.UT(ݳ=rTFG a<\e)Kƹ 08`D ".Okf >j&k j6I^ESNH\5#m,TF9w)\jA9J3st#/(.L&%" EXpq}oCam5Hl\fgs BЊq}K\pta0 l`w 틊[(z;[Ɨ95Q{u&^a"gM`&ґw4* \yѧUդˋ3iτ 61 + ULkη`͌ő8f.l// ^FydLV?N~3`j;nN?N9֛>;~?8x~VR/ڋ6Ag- rboȦ:Y ""tY[ת~'``s19S!IbP&5Vjj*ٖF d5lкtZ\a{ѻrwbW:f߰>1܌ @?ywN/hkrq/#_^/sSwiȓ{r I@/U'4hbŧƩ ab |C+ A˫/<'13Y!@d=` 槠?2& sl΅b]hRr Aשu?]Gh1o^])xJ"8P+ZQ٢ *|i]=2O-8۳(eK-vcСI=N 0A|iw zK!WƽbZ!xo]qu:zݎ@,Pb2GMYpNNiSnwWwнXͿxv~zz=f內Y oGf5u5y; z}B/]7rxnW0D4Nk"Ą^57yFi %k),ƒmHMzkh{P[BΡب69TF75Gdѧ0'\fcڛ,i59ǟmNzu^tpq\#;S:9G"CtݮL 'r5ٍ'Jz{_&ʑM=Wff{knO uLQq,+=zm_JbY վr|yR۷3^c|Og} O?+W"cOC^~cd׫~7}ŏj}'>,vO {艫t{vvRos`[;vaZoĴ<cZ3ٲM+~hS{~J !O'Nk$䩄B_Jź~wO{Z$j%PBDLWZLV| ,ӭsll/>S5k*qfvz^^nwu=Su4лcM/T^S5*Ο.~^JY iXPxr aiQ-U ti {d_1"9yd#2~5P&sVvPAMEA#ޑ6b*Vlaj6If)k'+ao[1c*ql^u"u?w(>q/wMWh@?sw 5Mbkm+GEȗsc#@ML;htf1_aQ5%$;^W,ɏk[vn8.EydUW-;ːou0Y%4_)M:.}/iZ31>!MgXJ<Fɔ&./?~0ix\iW޳IyX'y^X sγv"9-3{nyI֋U#1F -fU⸾{1Ӌ]8xkavz7yQ3h/`Ed:-70Epi=~<klr2=x:9e\t?_3YYG4̪NOG]٢L16DnE$ y{ul,tx2d rO:H@4ٺirM) /v"eR[d:x^mք M=a'tTM Qg4IE@݋@!yQWzQzֽ^GJy.#8Ȅ2.Rj*((*Y)˝z:$eQk\ bjjfIFs@5VuB .m2*AxP2͋`O}P ;.?p O`w.Q}8.blKNx2d?kə-iufP\VOQd<:[ ps(qm 6VU9/oVh5Gj4ժOf4A=V&u6TjX'gQpx,J|`T>`*0l4uCdď4q>nEX*)5Q6$&}Q{#G9xOYO3d"xA` vc)*B>fƌ(aؒ2vkZ"6 gD9jbN!F/fH)"goT%Pb*xF0sKWw)ntZY\=x2[l+ lzj7 W5_^XE͗Jnևd͛ }3۾^|uK9=q"]}s\uU7k;SsS^0n/~fCֻɕ茔GcFB&墨_ϝ@&Ć"j54^eAJTF#ߧݮ],sҽ}^DS6΋Sg(ex|TY*ҊmtȔC%BCʌ~%*\gMu?5*]-֬֋_b>hG!4/RT\|iD~ZЈKjTN)\귇vu{Ha_*x -D>mכ@ƶGPD0nW&hh*J)Ukk0E74Pp$B(wL?;CRs<~|ʽw<.mg.v;M#tSrN懋2\OnlӅ69S5zŌxarZ2 جB^EX䊺} @nCX'/D7\dc .-.+ xw&|#zuWVΟoNlڜ٦XV+zy,KH C^4an/^ΚStӣ.rdthDljNF27(EÝ3(f' J"z`Reж8 $DG\2H+,$џ7%nj%<#ٗ-r5'j\T# i֐"ZzRt&'hjgktjix+>@AY)@tIbAaDZR3qOT:Ǵ^_ݥ3ݝ闕ݶKTE ^m skrΨWpJhR#F-T F)2gK%F0"rQz(#]]):XFD Tؙ8ۑ;vB1  \ETf&#-Ofikr/ 4a4| gWee XhQ˜PEYbNdFSNkc,Vc떇{!P1 dt4:[`(l hOڛd]ׇg;b8_21w;ӎ6tУv`75P}!tR`ble ֐ɚLТ2Ŀt ll!33褈D$ GAiE-IlT ] w&vvyڥLT,Ib+2ID*9B{ T*kKEd|=g ONFrRP!u3qC%wK;Y;=1O;9\[ \b{Tՠ6tj|?fu4aʑcE# hI]U,E6Z,UT1yAt~w}ڐǠ5: x'6> 4!5i̚990JBI]#O'SzHz&{fxb!GEIDP.`$2kpʣkcyi 
QC-fgk&UjfޖtA*&ԝHQז0npBvė:,vWl1VbbfR"XtPw8+DYL*BXO=?{WGd{"o||C"wp(EYYYHS7KFQҌԀm=ݬO7^;$#OBiVV?t+ q1sلy5aGwQ{+];6V>IuJ9/cnλƖ8_^#guG͹Wӗ ^#͈k՝[Ot['|W "dE5,$1Pl-%XM0' Fyb8kvZaH1#k)kR*#K @50;b:VaIv.F`'Ü/.?JXq-ƒǪ#(ZK}Ү*MijofmY껼ƿN鬴'ŒCox`U~`4 y*n/f Ox<u̓ۘYbaSF,kg-Ŕ}͜M,x޼[2aU#>?`bZ CW^p~rɛúImD&6uܴј,j/ jw,\\K}'''˕+4EӉZ-nV88:mL4~\MVtsGc4{%\NiTh}hi9{WNВyLRE9m:}Lamֽhq.CW l !EnAT yj{f-/d! +ӛ/}7CO s En|6 Ymp9E Iu66fvi^,ۿM/e}`RT5bGF+֗26uVW>v܃b^PAT22HMG xU)9V9-$\+c6}gHRE26&tEu`2.gq*sh:Y7sv_4>/T?pCEMGˌGB19rv tχ4;3Bp(.5$J2 TX4iG\t& &UjNiLFLތTgc4I=QXbL1i qQѤβL$Dk=G8LncbLĞ{$WH\dTBDg zt\hfOr/e:f-9&;z[rCK}}8jyȞ·p]{ԨNj5 -7;謍6E]*2%Wǥ!cmw׸z9okmaqȦVK-D˒['$qϓ@dC)tr.4ׅiCA _ {dɆƘ4 E7D*uhTKֈ~~Ki ]oHO޼}\c.  +!!*%nױ8)s _XS i+󶱭kD mrn>?ɷL<<+ Olf|]!f綅H2.xvdZĴ Xƴa3^|۴M5I՝t`]2K00`08$5mS0sY׋b]'\}.DI6B;^1 ,VSF` 2r>\hvsջs9ol>[lkޭnUj ?Rgzz ?9_̾],ZƪPjv& #t%Pa41;yCJ9&\&Fݷ$Vұ@YĐ!9fhh6PL mS b+hL-˜]t*|@vϴKdWF֕wҚ.iO:u鐻:Y٫$fWWJu7./|e{V@Ҁ\/ L} I90z=my} ֮65"J\ m7F$d ;lYq_j,$xVx\0h|L!P0,dIA]rЙwv3~ޙ.27 g!JlZ,:@6FbTTQNk4.蕸Xi (e]N.ec1֪Pǘ1Q`ˮ +l݊ NeΎS}K5^$M^FAԶxWNSWQQ}AøVX+{eub{fM*C)99\+p2qd LZ cc3έs^}V[c1h\|@*GT\Cۛ w0 ыy dBC=~Hw{-\> _Y7 r**kZb8j]ubTm%^`ـ]W @zps;ߚD;n]ml[Wd|/$ͰZurѳ_MlѨI4s:5f;x5f$^-J FGWM ̱XJʌp WdP« PƺNPVŻOWI.PXDwּoe?0M/By;hw?N#|D<"%:?nz},0ݬb_#L\5?j>t kևWJ5UN#$P;X1w,p% :\QkvۊV&2q1;=]Atn/ [fL_ɿ,N6'_m]bu ?+{BzڵŇN[`{ p(\>Ip(\> pmp(\> p(\> 6 wa&Fpy9/}nm BQXp}@@=a\q`NTҘXsJǜp 9<.%)r/ߧPd(JmiDb9E0(W:D(T1B2Uw̹ Btj w4-;`[˨H2 "]'#|&j'^#2 oPol/çC;:5Dj@SCIrC#| tS8 ?=g)O2LM]iԜ++t 6HD_mQ+Ԩ:&dmEB*L jWAG 5S0 (U *vٱ,Q>ͦ}={O~~^e/. tyayj{jv{ߍRt0x[lQgWӾ1ye%t|j4zW'-)$#`6SXĻ5d*xΐ8Uͪndzn{gwxw%XDwcdL Qxy&. T tXVχ6>v(`)|IF01Jw%Ahl"8B_CFNMRqf ^qЖ UW:ꙉ٪ )%!T2$tUE`j Evh7sv (Cn<gz[>$d! *(cul ق10\[o$mt}ՁPy. 
XԖAW ¤3d$N⵱-{rou^Sw2[ <;LׂUb;b>NeJy(mtں3jG{׈OkXYFzHQ Qqv(ي&[ !T +Vi4s!vQ7[Uͬ7?ӕa"TOx6_>`}2K{1G+|G-ۥvb8*J "Ni]Ԕ&Kn:k,Cys4 azy]C}B `PKBw 7vyMrZu{K$У siw5ϳK@7{/u좃ŋZj;Ӌp2GA?fq(#2=j{:Zwx#n].KRǣ$ǽ9˛>oD7AW3~l@f6YB@J*+ *et4n]P=;kU~f0N3.ʑhrQyCT*jA馠+"1lg0쁴RVQy 1MgdmWqk;\LJѠ.ёD#ӕPO4R C_6yޜs4+現$er:fX!^"TYjШ]Syn$cڴvB["TޚTh A9Z#玱B=Z)ҿE^h͖r"#-+$JiQᆕB`:_PP-DhB]T\p$dA[w9$-Fŀ2]b9l !d ܆_T>x]ՌQM.ۍiiY6\%E DStV0r Hm > 1ܭ=ZccLC~Xk\ mJ*"PEr%']z[& C:z-#k?'Ȅ LK.1 GKnUb^QZj%!H8OLEpyd<,#R;T` I x%VksGi9jGR;1s"aѼfؓ/g5Fx \qsюБ*p[CG'Cbb%@};}HHhT*n+)eT0䪸 d$cQܚw{gmzHP{D6ǵ̴{ КRK5oW.mS/S]yЮ׾m0iikE@ x9 eTd!'Ò Vƌ1GRT]olK ȑG:zyRCTPpDk0^ `dnl[%z-_q|a"[/׃0(I+g0q.gSjcο>!s:UUo(rx.Z=P}Bge!TTRt|>hñ׃'- sPM6#;e-6€Ǖ i+:M% ,V1DQL$gA: ,Ȩ=AdԚo Z8VҸRF81+Li(\ Rj,:hB/eK^ &lJy;R1ZX&sY{[#gzv${_e Pk; }ώ?֠ϵrxvRW+njYE)Be8^C݆+l `\@PNM|,5%]UΰLJ$u . 8.v^BXxHiSs-gO d&WtŖbȵ)g rh9[d~6SDa2,9z{!)棩u6yDZljkF[mH1)tzMY^h%^9lUIW`%edvT;J~_Am_[>&w=V*0  tX۟ZpV62R9g}͗|O_ xhkn^^%<7K8ER qp(cJ똋`1ɤP9Wf>dOz$a kIkfJ'lILoF,)-dm~gyxCi/ֻ?ӞkLEZ+d4,$IN i9Ĕ A0ւ^v߿2Q0g>k` T CH>$M%mK!290aBzH c鵦Dι c: tJMXSœ"H;n/a,cDCTW'!޿g\;(GF)yH.59l4кbWYtpTb+K7Gw@ojxe !øu2t{?H2wC,I Pܑ"P SLp%I +`2%xv_-~;0(⪜|!11 %fN\yyA8I>.@3~yHݢ~mo=: {_H3t09HwjE].J|%/6/h8W0M(wmLֽYbb?kSݯ;]LOޭ[Ĝ?<蟜V]1Orr8|Kȼ'g{RΚhYehZ> Y,h8\LYfp^[bM6U:'^ޤqFJǚ篣q*`o{El=bC ?fg{~J'G?w wx8b*M$8h~}=oۻѵmkoѵtjs~VQ'.6-G]Yr˭{?]OfuލKtL}SvY; ړ{ MlEٟ5UAbrtqc9,H3>2pd5ܮ_Ub~ ZP<KzaғCVu&88$6pO\34;73xٝZh1G;/ՠi7K)׾ŀn~U'!3\ykD)Mz%jT>#t2פ XalNňA3="Deb\gzH_!eSRfmz=L1/cyDZSV7x(K")KdVdUDqJ,%g29qw; [ƈ3ǔԮv4X~ݮoNdo]aeR7d _UحNz>fL=߅wzt= k>Hڤ/Aou S)i! 
=Z⇗[Zst(>b2J<\\TpU!J^o(սU/yvOWQ% m7|RCL4'+2)) U;Y+lY_՘<{i hO2>9Jy,(\t ,%gaJ*' 8 ZFg`DMbrokl]^^xs5[ģ\bi)`9j7dT TP^r4G@Dr T?E="@M"EɩM*=lhى9 {]_GmIu"S3&W*o(q?N* ^:~]k[dcgȘFv "}!}Qs$6&_~&hs, /K\W]yjWzӔ,o8Mi\1ю.<{6=Z wq^:<}6T-gN|WO^^COoƦ]Ѕ'UmFqeV%gm Zhom` i=%F|`;P}:yΒNky$+I9Q2Z[ WI8LH23W2dc\+5"ks+YN=wjr<@f^ݺ~,n֡;|k9mAZVgD_ X\2QJd卶7hxdc~=8%*YWsW$5dٔZfI'\,jogLKr`IG &%nm$<$ܢә*yR%Zv &-r箭ùL/ekflfJqLl^o_\Ti}GWW[}o]1^=^u'02=PķѰ ]i5 ʈ(!I ڵٸY蘒4Y$ );A2R ʃ,pض9m6҉R564,~Ǥjw̷}֦R9i~k^?T[ve7#G9sd E1KO(5l-ƀQ^`:mEx+)0_M ~0ۯj)rBΤ;@m%gӽXiǖzuLbWHS',X̐B@:qk?g宗y'J&0ac Qb^%SjUoѧr۽6GU#/Fn<0i@&O L'H RSF,Y d"\!IZuFmR2Lň9*rVoR?:WGwٯ J\;-8/59ot\L,n𮜆ۼK?L&?co*+g- 4 {KлcHפ3n()q욝}2Z`3?UܜMpſV3*Ga\Ark^nyݏp8[ sa> ӼvT7k_+zDھ[^ݻUL#sLO(bK+\B%m8Qem ͇ޭ5GfË~]u]u`1$vI̕?̃ٳnlלDhiqA|~GȼNV7Itth66pT'z)]|4,duUaqƻ|"ڴVa-^ޤq!#Ɗ/q}{+j㜟hv?Oj`äxdr//D?c_~?r/OKBT M?"=w^[nM[ZX֤\r_l?.Zɬ$'?ikܱўK$WSTU5UU f#>S=Orא?}Fh4C~ NHV !@((nJOO C(_a O ?6ކΫj=/P,SҔ&L 9@t("a`.H!;E3рpd|PC N\ɾ)u4 lg7XwZ;1Ss:;zWuK_~te@gT&FAtXfs匆J:SWEwDҪZ.YbW<&I9',Q?O[zd'JEv *.$Ke0MIɂ ,&+A;$!ERJ,%g29qw; [ƈ3TKMMiއ*jj'hxk 跮02岡 WU!vĶϷ-Swݧ]O9Zt~mg"76)Kk[]6O]fl'gD`4j6z8-.lOFV4snzf^ ><|Fnha޻՚w0\eDϾej|=VQdCwu6Ƽdڍ'olϵC-6?m8adh6'iNESKyDYht. d+/>7# YCGᰯ}^]r䲒E*%4Ti9CB0&aFڕ `H1&Kt)dsȬ%NZu>h騥-rEAhԝ~G=dN8l?|ʦ윱$硈 \{P+eR@M BD`r'InEf\=: 6sb Ut( T]x3Cfҳȭ,[ Kp`DŽDJfE4V뀂G{Yk(gQ#;tXb:eJ&c9{ ݓIG#yaNm u"Ц|=&{\QKCa6@L&}xL=p9È>L9A֠V:;k $D\%Qi)s Ixt ith$ wmmHyƬbH8Osa} x I6~-X#SIAժfW} ]cTtnR%r+UQdMĚJ(?\rI([FlmvDDJ9ʃ^ȞG2:vGh/,AXQ^Gzr(&z̵`lu!r{-#utӐ};[MЏ庘01Ƈ䐁H9'h&W lp2}WIk 5)}`_!]Kس]/N䭣p$>"n_g.Z{@K|:R[Id: (‹,Hj fN;L`^m?2́? 
0eQF"/W :'PUlt!P0wҝWz_IۘvDj89G.\zs[1?EwL؟bҩ,1\ۚL_x:?ɮ T ZDo¢qRL?Zs-rP\ *RR| dou±Z i 6EB!̤]"d*\Bխ*!W@Kpˊr!T;n<2Z^ϗѐs)+Foo{م {މc30;Ex}s/G[k?XqF9gt5&Z!X[FuHVNZ#6<] oE}oRp(nu ]-E4uI>[&n ~g;n3&k_s%* -Gʘ:iɾHqf\q)mq!yc)Zm @[b.cq)jT؆3.ǂݴc_<4쇇&q *=7G?GTڸя.uM;_cч+CĭY5.Z^" ؅V&\4wD~AjE6g4v&>]}~yY}Mm1i#F?sez{V3:\=~;dӗi־]~Oŧ-siqz[,3 a] &V jvUW *3)b\.O =[gF׆䨋M*2|h0޸КBJCjf?zFv ]gww (RoxCeP_;kbP 1c0F /qеxqv9OLJ"lDOHY0}uvq͇cw7 d&fEQnͨ#pb2&?x7WluvD#_tC86_8\u/GG=/嗛YJٶ͛|Sbʗ tR"$1)JPCcB,IiARbpZ< O\\^߹fͽox]3\R\՘cV%** PuaES+Y0T̔A 5 Ld*ƒ+M7^'//=VM8+G%x}3s=%A꧷2. ̿O\?IK2>)W+s|Gڳ+* \5qWMZU WoL dJUVIix7WX+\5'W".ɰ&vc& Wo(PvBpٻjz3i;m/&%[kUMi lwl7iIgvJ\Aoqrbr=<1ߨb898y{')Vw't{+}J?ǣzhSn+jAzg rSr|ʒ3h..Dh20;oͯC#r(y%dAb)B QdF#`tEWյV Є JGm)6rс89RK!F!T͛]3*\QoCFLGLI7ݳTxn<Ǖ}u sAB !'N\)):Xc8ig"ktWC-GYB,TLOL E1&:GʥΆM݆ZN --V];,՗,'SYxqruy J6?N^˛^rgJC=ՍjoSiwiYmعAhnEJegWCK.R򣲐g+25r&"rՀ=%LkQj`Kf` * qnXM3B茅0cvĸؘ񖢷7./JS'=`/?_9b;VB Z%h䍂*#PU2R!@6\=xGPwElĻih^Ȩtf"S(QŷSW$_sMd]g&nĎ96vFmQ{fOi|$}-.zM"%Dv(2{6TEgZ I~銇հ  t/sQK,Q0J^1ؒ$Nup7qvh8D"zFlEs0֚quJW[KZӢ9b _(I\P9ۊ.+$q1$FWقVL19*\.N $JuFnFϧEu\\t^g7-3.Ҍ3.>񐠴<.9$`L2V {ahK{5%P 0]pCXv싇3EF 1 0u 9GP~dmʗ=>'sz|Aj\A6w|;GϮO>^ fc m"oWkdo \5YWPsӫTK>KXQW*CAY Cd Ҙ:aD> /ϒxYBj^}?:t*]GLJJٻ6#qH}Ta`b<,[>}TˊiJ!e;߷DZ#RTl3͞_ԁXF&)`LvYKV A:gN1U4B%aP 'P F>Lԁ, |1C9&JNe,AeئY69$xN@h؉Vm;xTi杻v 2J$|nP{tBk+C:PC;Mh#]D= ny`]g/PU~Ot޻aaAR[HQqn k!Aqr'=tPi~|oFJ9PF$2ҋFɼfa#Loc+~-~7s:7Y)^y%ҏص9g]?v3n'ɲ; i{^jѢkS*cJw7RUnnoe֧?苛ofi\[tW7+Mg%OoD1?!sZ O\>ש͋ṿ&:p}= ]63`0lW5̹:oH+_DKϧ"smr}7;Bه'f+Ʈ>AF{0z^jTX-YԺk3rhr'񅵉>ܺ|r]W񞲻p>Kûfƣ/D5I*S}nW(ꦸh-)%ET>* ,Zhs$M d.N2Hn3=f<٥*"IFO@ڳ9KUI(T }mw Jм+dF&頶BP^9f#ΘHZY3r\l FaG*8ִ/V+TƪIGʬdJ"R 'DP!.A@VAV2e:KNF(>P(X=@fP+ݍ;i{ !8}euU li KN D4 IHZDqU$B38(|Ҿe$zAxf9DA) *rm^KΥI2GMikaھMoϧ+k+9P@>XU C_1ZO4(DQzX-ghe.3?ڲ64jYW;,䷓uΉ;%W>>y"b{z(SU~zC~Pb!pdGJI.E܀BA -YgeC{z,io;5 D!$Zc0Yg%3db.E"x!kY_~wo>,c5~Lt\ʐH䒰'jsOnveA43^?߄Lyx~_*\~0W"yPǦ/ЗwEk_ٻ6%Wd?N qR-qM I9o /d {{hjNC׸O\ƄYNx>ƛٲbv1͜+Oe5.vI'ed|r{y]OF$=]ujv,qFJ3g1~lNj>WɚI*k{ed}AZ7Vn [q>KdCg:YlTݨfkUTuN/Dqtw77>Sfsvq~@u$5 
mE%[p>=g_Aq_UBq'L(U%U,EaAx.G+|UM|[/Gۄ#@:b#;=7y5* ÍJٓ"\/U~mކ+=;GmrXnjE < ,pHf5O F;ENA3n N#:c\dڔ:G@'nVA|;<ŠzUnpKN%] l*~꺇O?Ÿ/H%$ ZBXYvJ~IbYrm[tEJ 1(ͨGQڙ%'Ƅs Dc@3SY-d9P ༷T)huR}ꬺ\\Lc=_uΖL_JWw3LG#&ZEM^9aB䲯Z-q[B>"1S*FLgr:1}jc{1/QL[&൮'^U|{_g2'r\tL}ض6Qb|\nŸtRN( ,BwƟ!~2g2r3]ů݊c-;Xaեz?^E.aYNjYiYȌ{7) Kfʶ/920H^z/]2eƷl  x\-ŢZ{ R)Nv[X4JCL4G#2hZs~rҒŕ5%+3[͒r.E^ )_K:eTyPL3H%ucQ]p"c ry)9~boW623G4~ _WӕX:rFvZ@˙5㝷nܚF5 `DSƤ/(D(Zsp[lshHɨ{:]^^x?T{]x;co`vDA&WMRxd*Y>n ![d:xOg<2ntHP؁ZZJ"B8K{E\)Q)FXBoib[x˒ H<]R`Z>jA ~rPOm ;|.>05S ݢu};-MpEϏiqBA,gE2ՠRH24ʲKsh5cOT*Q4c@E H&G %xn(.^Ō]RͻMJ M&pT{WDij.i5*8W(]!bW:ڨ=Ukq"U) \!Q;08Ar@AX`F/Q o2f I6:AI kZHI&CNOL.Q/hzgcܯwΠ㇃^7|;WsA|$-; YjKH&4G$H K'DԂ7W r@+ 9\`I( yK4˭0#k%ԵDrMJ!M*-ydK'(h"  qc( Dˡ&A ڨ<QJO5߰W1r68!&% n..#Q7 dw}2<\5 xoߜ햢7V;ڏvD r#;a=7?ZE0*"> 팶ig~T0n̘l{Cn8֐{Vf458cH:{2{M:{PY =̳fw7: v>ٶ Cf|:Mk奜y2o@ق$G]L!s E ?z5 r@U 0I۠IbQk0xy, ^FTQАH1&HiLj!9S1*7J1pX^ȹk2qkK8BQz 67JɾծclJ /PRASHBI"R0N$ hEv6x39階TY8ź i٭4@g ֌{CL IpydZ[ }ȏe5ۨS.8 2K2jLj0J4ۅB;] X)UT2RlL(X:@5!@ 1PI_9SfRuȁ.|^̫ Wa77^v$RBS/t墾7\*e~YriU_?v.]? ?+?_8pHDjvuEeԹaHf8'׊jȎ)(!9Њ 'n<+v2:-f"rU Sج MjAn S,A/ #|!v?3| ])D'ziիE5M-,Nz㓰SC_Ts:}/ԨrYNxv1U[u}yrzlY1\fN̕^Ⲛ[ns$ד2T o>=Fͼ'j{aHJjYfh#|̳wAm6Eݫd$2 z+7-^8{aFr(䦯NvlM y⏪n %(ߝ}._~s?;Ώ89C&ᷭd nuهtu͍kaMz:krMw,`Ҙ[. ?u>\}ӏo lp5cO>pZUGA'w+H6. ;i%U?ݠJDU1S>H~P@%Tc~O]> X۾`Zh6z(_Gl$qGPF9/Fw<{QI<{]k%xTtOu0yE|ahM mwAg|Ԛ ,pb2Ah()h-=iPg [PGCc&'ǵP#ʁ\ q1*Žr\g9>nSLv`YeI>A. wn=Nŧ͍2_kԢ(PQJF&oth g{2`PFS{gPAs撲9⬊IU8xD1p͒"p\0xdn WXxS}8.1pS*I* E*JnI)x1( t<]UPdB HmHd '! 
T/^E53V#gIxC}N'['JvN'z_rXydkrDZm1ɹA)M.51YJPWEF4:ǘgF9ZL̂AO &c.>Q8Εt@62V#g32Uaa5 ye,>).Ju=]Н<>;?>?^}m ͽY3B#τ9U)rC9f˭ѐK$(y_aVKI3䒬 Y  L@RSƜŷ$Q}&\xڱǶ(*#q@ultJ#Sddv4FICKHL]+Y751Ԓl$ ]FE3>`d„\dH?NsiWӣD.N11YKEY個.-x*[y RA%PhI|$]-j̞Ai& /xX;CUvxV1p uNp y?j)~ΛR1 K>]ϟ*8=BRHY5 nJ׍`*Ƙ0dyN{WDq9頋箸AN<9M 1 MZX+Ar:idYmD./78!j)E(=3¼ kۭt#J<9 $+{-6=}Ͽ}*FȠ ibܗ(TL΀sM`qc4wz@'A0 WA(n=y "'i321Eރ>cH 2i9e.T:ey"KŁO P$P9 \{dA@v9GO{YLI&j9OjJkuqb<7a-F1H ERH!)dZ^%E ʉ,2 <8@J[CTS` SIDFn2i[$+A8JZ䩂un̋ҮWt K{ wz1I/yL+O߶#[E د`z0^gb1I]x)#`뱬tzn{֭wռSwbɀ.D1vDfnU s76Ho Y4R Z'23&s)0wFPagtzo5 2Yi<'z%%u!zU@p`ȧ4]]_Kts58̶u½ǪE~`(/#R@LƆmUsF*'uhdp,]~u9e lIJT OCJ U3mVmPHb, IT1?>e Bǔ&Y2uZURJM"YY( )C-81V#G?&b{umi[֗LK/͉֔d(MzV$Ig){zxGeQ\tgLgkI1ArJё `N)DT t$]k5 j&sb)/QP6$, ސTlIq GlA p|B$C-[Qٜ4dQ)xTr_S<^S!JTxӲw5i& /p@KW:Dq;;$ &/l"f/M`e2?ιlj~tƓ6q+/FO1m] j~w4+;B\]O.3=ˏYXĕ2ވ>\aoFč[ZY^N7˓Qѿ /Qsfa $:$RC%4>ސުL^ WCMʓp7;Zd:e#2bl[3c_" m"&cr>Pk[ąY:"~eR!,lr%;*>;\ p ᪭Y`]r9*kWJ=ի+|/Yˇ$ұrn\_2~[w6B2bL:Q℉2o}h254dpp3D=2&zA.I LCB,M>t~0m@Ƀb!?+UV}B5\B@68 *[v0pE"?B{#Py5+|a2$p?^?Ǘ~h4s{.ZGL=xEsy.R"(eh~7Bwu}uvޚ]D?p> b"<;n'qhv? 
Ø$.ܯ{>)Qyy}^BJb&?>"7c =լrn]6ahd4 K!I'{:10( ]ɴ6l%8~|{[Y$xP>|CUPx[5nS1ƜJN/7]42X`` cB}DYhT 1jwhѮEI肻2'a^wz눴~fY*߹+Gw2zNME[!V+T?`].m;OvxHKs>?{WȍO!=.~*ޱ%$;bK$˔jnV=|†DKVJ_k4#-Emޝ>rҍ2]Gi}Wou_I#fwjʍow۫#8P_m/_z^7~v=ܘ?nq-q^-qܖY-q-qZ?n[m-q-qFٞ==D '_?}^;riIB16Jy` b6x8`<D&"!bCu<-m@cdZLLҌNhsRV2T3VmZecK/8!&РV#~PVU1ʈXm8i#[*!uV]qQUEp:c3!yõNmRO @X@\/,EʑLh\<<&⡮z7<݆y_#ʔxdFz׼_sﴺS-MOxYQ?HKYĿmQKD[p,@cQjMF~ $E+5jeck}qw,ERex Mwz%[Gßʐ{~?80}{4I)ݤKKn_PMKl./vs/a2/ )q\u'a_Ywxcm>7*`-* `h $߾E5ӫqo/>P۰ k{mE^-Q=2cT!h1YHDGa|P.,d,s&FF}!=~ҏ8l0#yڭ|w߳-z_,6T6گZŗhP%FȠ]I-^ig932HN#(SL<؆{A"ˇV× \'C0Qlg\6܋ &2$< )O6]U`VbLp4,/V M)AEn# !I!NYm8a!B΀:]-i!g&'{뱃Zhjx{;z74D-4iaYE2L˫Ō Ve6 t~ z!aMo^uvKav3I8: a|a ou=$p)sLu"9'6QN.F-mN&gcNg9o7lh0?A ~mWDa unvr6$C:~`{g6w[^'~k/ OQj}pvH[ĥQkwu[瀵 h$X.b |iּѧ=aiu}]3wީxm+X[ͣ hpCZB^вfwwcYg 1qb57Z]uyc'ちTrv|RtR7Cr2_D}J+obL=緄Uhۣrb0*ӫ(YF}?NH Tvc::.;UPF)?SZI/&id)ѽKg}E7UN Q3 F =ҵ4t~4pIso_7YvF@ذDvk'Lxp-;<[sS(hfu!ttIRFGp %0eI9:ҵ:/Pl8Nu8كθ $*Gv#GQpQQ>a]/IJJYbVZQT? K<gc'z9BNk5-_Ji?]_ aYH=)$hYL8T{yysr[c ^vֻIt̰C D41-#YjШ] Wyn$c-LLD.,͚T h,rv3xVΖSpM"X/4XtfA9TyJrҘdCRdn"t;Jt+b+u,5w9$-FeeE愳% 5pF^nƩFcZ(+c%d0EC):+B 9RA\n:v@:VTic~k?<ೊ:xV1ĶʡŚ͠vzJVZ};p5)+<~AGJD|> J:"Erx-#ir=k/OV?ĵ LK.1 GK% )%Cż8tT/y[ރa$$3ul]+Pm8[dyHDt5Ź4|,v#H)=koboYEEq2H>‹6.;#8 'cz$#8yxQT>NJ;!+!CJƌÌ֤-=9t]rcY?׾s4A{jV }T)>xЪ|=GOSԮ״"oFߦܣ }/z<[[5C]X|2cO?ϧwonb6; =bۘ0-boLv69ꭐG:Q 3'52*ܩ|u]TRѢkb] 'O<ҧ$u1U,&2zP9s<%wVfh\X>55&@g8{x \]?z|mw+|zDm(ݣ7{mkQ_&!&AM'u3w@Kx Z7d|~at{1lh;QW_o7QUT kKAhg Fyp59ZΤ"h^m{Qdl0tPMT#X<c%Fj2d%evr(5 jgDD#ؽx&T@:av3_Y~g]<|op6p|c$T$|6!6P@8JY#TP%~v2yN{߾`bWNv AS0ē0Ux*fFYmRwazKz}*겶k7''>`T o>ML'"5crn?L/]f4'72|U~1KFj6|-|p43#u59&tX[ Jea'mU>Qhu6E:7=hVkur]4"u\#i6Gj׫T4Ue\ߣ}_ N Mn?rǷ~?N{߾xms N> xVxFoo~DնYU EWC]vyMLVH-@J6ӗF/IW5=]sBCA}rr.t1imzUZV.!~1:]p[:jM,ĺyJ r&D-z Tp3HFU-!+,|;cȂ!,.ضwklk-{`|cS4׌B!SyR I * ƲbdAyV>SɡΘn?]?S|<ЙLă j)L <ȏٝ7=x\ջ[)#MK||_~<]dBa4hxDԵ#jc;ɲođd=)m>YOx<-9Y%K `_z!ict DV,[Rn1"k dIO*bRk^R"U2 Lr]Лejhˬ8mvK]E_]߶eƥ4wtzشq%loAos*\t-q3'ֳX ?>L\<h3[|.l '~JË4ߕxx{`fyPfݢQyQp10ꖂsˉGr.zr.}AtM(mjVgp ~>]ats-O#y ^{߉.Hw-zZM u0ds$>ت00c 
[Binary content: gzip-compressed log data (`var/home/core/zuul-output/logs/kubelet.log.gz` from a tar archive) mis-decoded as text — not recoverable in this form.]
aYD"OPAP}E#"&DwO 9L\1$Z4n g==Mks{_/;z_sCLGcdDFydٛb6:F Q!dv RV~tdL^J*c9`+]@'zMVJJ v)Ylp:aQ*Yza'ei&Y69`ȡ,%([]f @v[}8]õRv1jmmMkOa#,\0!K3'«yp{DD{CDU]c8/:URHDР|Do6oDWeK·ЅlB9.ڄq8nOB;&~\|egGW%,| I$%eѐ1I>JRuQ0" 2F@%y/Lo־DZV0Zf_!!6T c!L4$9$>ж^$ fI @RDVsThtc@3q(4bNٵFkD*QGml]mחHk·9{)LY!Fxtmuw9=:B!}0t11O@bwl$WFbT3rXDr`(9)7" Ks}iQkp#VM>k1NƉ7{-,sx6Jx?n(pmTD[wQNjwjUM.*q+GT4~;:uc~b{=\m ^Mlo?O>o௫'ڜ.j׵Z}|wPb֡~zA=|8 *|ڢFTjyHНMOZJ=B-o־tKڎ (2h@[!x::Vh`% Kڈȷ}]Z!SJ6ҭ/xfdP 6~ȅ` ү%>QNp>O}p%Z>ʛv36AT؛pya}t B9#ן~b*u<%=T9WMNu?8@ePlM[$mSਈ,shdQP"ˠq`&PD.JV%ܩ5j& mtx:xׯ͋;NnY~ +|/@fjf?&I8J\Jar XcB&/SdB].3sx~~;GcXQ%{){㛧oj:EN`g^OL/UJi^TnX8ҋ\R1WU\sUfJh^RZ\]Uqޘ*-+U%%+턖d\ 徘*- \U) +>j#sGO0X5{cFs͕q^#sU1W,/JоJih^ह@Wk˙؟'S[1?{בByH]}#@؇d`׈0%2$ewϿMC85gx9U臍,M O>t'֜(sJnͣyƹ]H)}b>sɓ1?H90gNp0CvBh*-q;H6>]eY*}tJ(rueq`Og0w_O0޷J ր(BQޮWɭ/ҿwGW?^\ەw{Q{yNg/7t_"ǕwV_*wEmUk֛f ֣Oee gas n*FX/^~z>>'rvؘ8L&')(-,zVȕ3N.nr%3;wבkj;kCJhJhAiY=Xi9cr5 y4\j9w".YM+ZMO(+0r%Er%]@I-rur +}_'5QJ\ %EP ܝ\xCp|DZs+z\\1w#%{?NaQJh.WBii+lY$WI0CBkf?.$(WhwJgfPp0cWzf] \\HgᏂ| rxu߄rb~3 Z~o`e /h`W_Ӹ0q3,77N +9Is@rYyr%đdg^p} $Z8wJL;rX9iӀoDhO\N43;5NO+s++-hm\ EP@rS* #Wh]rfu.r:re\lޝlF7Q N>Jv\\1lϺ VaJp0cWBxr%,rure#ɕqƮ7Q z_Nj+Ǒ7Пzm|V*> 3gb /hPnT) 0 4#*a$Z^"Ӈ(ѐt2\{xO~uaƍqmdW.ʬIkr0Mj;ؗ2 GxW?zc28Pe3cZفJ8(r%_)s;dWH8]q~oFP.WӀݾ=Ft4832,rܦ_ɕ[5\q(r%A]r+rj +ҾFêQZDa$2*㇑+}ձɕGWB94\\qtQ$W0r+G+#/+4n+Sz `gJpEJ("W(W*|/hAPNhmJ %-˸%vY= Znk ljj)zճJㆠ#(a7\ fKhGe2,S(WQgƾT؏] nQJhc\i'6=+R}L&Gj.8ifO4J;'/rܦ'2A$W0r%ami(]-r*rɹ2 ;3\ IA˕PjʕX_8aPiZ9,!UI!](r%_;t}\ e\\Y40\ BD?v%AʕǑ++e3\ s+tqC+s^B+'JcPrMѶ%Yol-ʵ񂖉䇹5h]0c5hg&52/sAˤ d)ǒۘ")_/u]cnkӑO%ZN1dE9|Oq7w*P)VlJ۴L/Ԭ'4{҆%}fp߽9Is]_.;?c*??蝉α.#FbԿߕ.R9e"7/Κ߼K K/u=Omjޣ\ MiO0ŻZ7V0U?a򟵔if9:U g߯_{gT˖~8OÁ ] WP 4cmtOT]BU]}0˖L6jTm zC>2ShF՘= )cT M[m}ʹRVUu<'h; *6(%~i.*UA6בblҩL>V ʜr 4]RkPTȁbkdr-h LH5NWJ=hRVZTjx!>}DH͖zԬ*"nɚH66Ԕ1O´ =[z :ΌfZ;5#춸1kFDąj!SRWg/@xĢ ̲briuKzY+EGdvoW5 P:iʹ0 =)E~rihh*kR=^bC- ͯUh]#E˥B_H+Ȉ*3EC]_~]d|Ht%őX +Pɘu!g5} ԜOͭ$W-wvZtN5U71DGI+* ]* T .BU5C,1ڴ#QU DNb1Y\S>Em5R)(f;XrnU]P;YR5\wT)xL~;BaV_f/,]²- U# 
5B~sΙ=V&S3D9qn"ʃZ]De1ltFXL}+]`!N@".T-74ˆbwhY55iY70.Se+Ni6 k9+;>TE (kUDP|(Iԓ.e,.D1K؎zn5m54CRĊ Xe4խj\Lq+Q s pMQn2"MJFT"9ΦVX9Q0M®h, p*7#TC++eqCd4 aom"1]Dmla[*C(]g9kC}:vPM;*Ć%})Q2)3ˁ YDO9D`N $YiAC5lBD ' !"vk0LNkxGX)R6 iWA`zm:r{J5Vr4BPR h}j${3y!Q99opBK؄^haAb`۽Q=|+ZCk>8a4i{ <38o#fP.erMwK{_v2b99|1&HTؼ*ڡ;!&Dh|v_}n뽂tq*g`ۻ\>*Ղ^ewA ئBOB[B0Q8ti(6"J߭;@EVLD!]I T*#"6,qLEOaG;E a1҃VsB( =I&|Zex^]ː>x!kn #.ne6ڭxOC Q}Y'1ѭ)Rp!ngpm8,:HTCUC~^n~gUc0ۆlZaTXvX?v?>o ^PT>B",'CkAÍ%XCtI>hGx<IrC8*KtȻK@|̡8A#tϦێ!B)LERhw`aZNnK)!eٔĎf?9^ B׌hw,KihE޲3FKgy+8ʀX`M}070F,YF]-N49CUxP6? >^{_׫Bu:HJ"5)82G&j]bh:[% 39(hA2AC("ep% ~X(9y|hpX0@ur",U҈lls#M!jɢ8KiPJ j2K޼Ր6Rj2+"l+[!afs` "wN:J `ҹd]6Zԃ4֙ nu047=Xx&Mv>MˆsMU&AuW!PE"tӭ26xfsTҿwk0ZSb Iwy. +5 1pPkC{7j |p)!^b,޴9njz(!.=dsS;VnSbKhtw0R!; #B*|(Az|P[ЃVCmwԫbXW66I2O WU.?~\%FM!rh)bbYa23ߕx4-iZ^8bTHDz5G? \h8pa~C^kJɼt~/!{8z?M얇Ixwb6LxYQ7)Т2C\bK $f8*8VwɚU#'s4.j߷mp:'p62? ߾/l1; eqYCGuvtMenp_Ўb;+X0Eo-<}|r?v 7^ uui3}-"y[ԋ^Zl|pm\^j86lnt اGϒKww{P#C~@͌lfd3#fF63͌lfd3#fF63͌lfd3#fF63͌lfd3#fF63͌lfd3#fF63͌lfd3#~?6S?od^mzϣuE3blɅQnsjׄ?*eY7\prz<^T?I:lW/O F^'aD6? 
)xj!I1tОeDzxun#=#E'6mXV&=,k ˭,~\Ie =#`l@ػMn o|9ȩFN5rSjT#9ȩFN5rSjT#9ȩFN5rSjT#9ȩFN5rSjT#9ȩFN5rSjT#9ȩFN5rSjxC45phy'PzCNrKON r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'uyf8%ph-y'P:FNgc&'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r8.O٩V<{5SMSwKrjB/ܩ<_λtqLрKFC1.Un0(%s0.}R!t+6R*\eBWNW%,%]iw+l0UE+v>(%mʰo@tU{;\P誢n骢$%]iн/I=Hm~ë USn;ы^jK+.pNݛNY;۟_~cw/;7Zw~zXua[N阅%ʢYX˵5 c۔uNj7DWJnilv)%^^"dZ<=J꫽Kx.oƥ~MX|{HgЖ&&<|^ێgokcosfզOkEOjބI9[]TިnHGm]v\ 盙f'luPɛF8'Xlba-~FXT݆t[uW>`vR?+lƢ^oQuPYʠ*/r隸`+C ^n< ~L>˭|C2H qxyzVilɮ*Mi.X읎Yk."gn[D.:jU7Ty)c!=ƙҬȜl[\cn.]AIP2k;Q^[N%yݵ<"F?d+.=éu_eY>Zo,yUχ 4GeF}-Mߵ;l:Es^|1~\~;{qO\H7>x3G-7x|M3Gy](陸K-]@UDf5HDiKBDd > K0PJ(-ƨf2[VjƐ+?mޞk܂f+-cq*A6Q΄ſי4۽59h7'۾fSP(Kb.rPNv,kEDgb2)@ٸU4)geK2({`5RDhY6rJNRӶ5|^;6ZL^-REST;"vfC_>K뮹xޖ곝7&^6-/, UF됋2muB2]}k\IhgFFKw+w%6}aionCnbD_, s!217ķytG7p%>XH:!:]+?E*SpWl/Fgn([5/R}vϦW:+"W,O mdsC*3ToH8RH4!dVH2k=Qt6iN&aZ뻷oNY̋m־Q%JSmesWS?bل 2\#{l6g%vUTXaѓ*Ϻ6:!\\¤!I" ܉,ۨoMjRN(eCIKhe+aZ.A,efs!o9)58Oul}CjNNg?$(.;y3>}l~2`u9C~?/Y Um^WȘ[tJNV֫Dh^F:n[6Qf}+M59שdwЙ eDydw.:5s%>7n Vˣ.TNn8g!uz'IsuyÍ{zۭkEIҮO]vuCZzϡϦ|z.Tcjm7(Өm64e+KfW|'bI;/+> Aӽ^y4uU{ko]˾N_ķUh+mX_d7#}vm9%<`ASbD|Ҏs8$E męfwMOկPR& 1` &p£jnDh:vn~|I8;9ޘŨR]9[v&g2aan|dՏYsX +=r8SɌ N2%HPـ9z/:惨=;vϏ t;j?j0m:CH1N#i4sЕy!" ɉМ-K^ZuGBmu$\$)0qZF;ʃp-ay5mf3ؒF̗: D};ɖ!VMU$Q e5Phμ](z-x3JW#ワ76-m@tRdiDOۉ۩ Y$m=u 0XQeOy[yulcE[yƳbY1[C scyU-Ջ] Gßgb|ra-hM#)8Gi41jfY^03f1ඞ|1f?pQ lfUz[罟g=K/F7WxŪ q?)\ 7ޮꢠ;T1Dr7ED>Ǒf8x)7y6V+16>ڴ_D *a,yitdNFTS J>4lύG㨬>fC yY!=9 OpLŞg:i7 Pl$A  QQ^"AtݙN+g:yƏdFo3+ h]@cO,tυ1';G0?PntܹɁgZչ|.(a9'^SUSiZ"+4"|:_N`ƣY0s"%vf bmBS1e:EhzoAC)_WfZ! kÿ'= dk* "P}2 F>I'.*s|?)U)~< ;-󶕊M{sfmڰ_8gfrӎo*%Gy V5/ͧ&m75G^)LFHBafgcL]^p]LVes(hASl{ay-N}y6e-[wvyx{Uz^j&ц껛 ϼ v @[:$>L㝧p42g릻4$*lͧ W?] O-ui7P . 
LD.}2Ui*qJ*q*qT"*?CPJj 8a,X2ɬw8s^p墥w5K:3YĊRv̜]S`O\ƃKڨ)!1\Lc23YMtRD JNgf;;^~"aV2kP<3LrBIp1 &dt@!ɣ{n殦ՔDTOb<鿫flu3z|ffӫNnESf=Q#-6R S̘yoSJO0nf1VYE)6$PAsgт#*$;A UtwGKpklz @;6=YR7)f,?eHya|iИAS;&7-s 甧":FLa.CwѠ Jq%Nbdnp)ShdTSUF棢BpG"xLC]B }L?]r2 |:c*%RqDh#SvF kBpV 䒂Qw`nUPCR\ ] HL]`Dʋ0roDʵ,]P`Lݭh=s{3E)1v5kxỬGȿpYqD4pARn_ Ua i$74("v&+AO#t!R0ÉV2 ԃ8ОG)bܳ u*-&<,Lk!O+ȓu/Ϛ:gp~fxRO[$(hY$| 8\p*i)$6QM ~ } 2ц␔mD=H)j۲Ťv'%} O\5ܨ*}0Nl  Iш"lD(,B a8E 0!4:8{ڳ)Q'iꇟEnF?73t,UX>~b[”>.D _JыMGۢ2 x\.(u)%$uĵ:J>HT_,|LA|4~~8{n0SzWZ?WԷUaѳzW?,O(^Bybxo^zQYݼ_'ȍ[ݗ_?}ݘ|:q^PKaNOFw7nxκozҜ~ 7-f<}eWg16yE2{}7u7f~aiK?qLGڗTEVk?LFҥ_=/ϥ:*% n&=3k'{Z;.!"?qOɎ^JC3L#h&]xqPϳY[xlT3Ԕih0˦Zɳ:9y*pwٳ]g{EE^ͿOtuy߷]\&nh4~mTnÕl-}{f@ ƆA:0 j6Ikؘ=T5<0IUY0A;՟!c ocZMɘk\]fUC1ʺ9JuTaN\t /&__RO@L/#Q$ :PcM0 j]-{H,ߍКulP+.Jq`QI}w]N^Wp%GkOf?Rm(_BQ˂M'!C6Y8h4EFuNV)cs=L|( nf:[esA}5q;PDE]B"Ӛj1@UBS]El"4}g(uQ_ĬQBi.bֱ/lf-ynNԍd0H9omT*Bok{p28*U>v%}f7%-O"0)]:uw2z9WSadI\pbBkq._l-cP)4-#P$ s 2ȹtس\.ؗ*opR.2w&h],8˫oԌ"(EՀ6H.$IEΉ›ygl&Խvɵ&{9{XTCao2ES*%c+iq=*V1۾T%+6xTZuT0h"EX!z*BYκsC9_'W~Q[%D"х6b1m\Z(>3٣>hH>%;JQ@O?2Cq1Q+`AA1d<%Fb2mS앑äRe vA|v~ҎlQo s,ئOʁ9EDz w,YHcT}7f F- '#%S3,?ɻsv]6er{iyhE^Ћ_7RVy&݄or$B4"{-'^MZk'FT6I;ǔ|݇{ S܍CY:S [4ÚhSd/YuSQF P&lbuɪoen'q>V-/%풗?ʑ謁C9dn'Ǔo~jmݍ\~;Q=S{>4nTڏ, ck7vTV(k2KEr6ZOYgטqbr*C@C ax4G'箒ե&lvi3._|sMέ@7wz1qyh_̖7:8 l9)Y>sn~X*oaWnNfULl=? 7>e]z:?]|B}Kq6ҫcyq1jȯ9fM9<j֦TzP-t+oE9_ SYon=ZJFZj1צm&\=>3: R&]mF~ x-rCaC)mq#ۆu ^xtf~Z^峗}MK\_?%6گe]U{NgvuV־I;w$Y{kG.mwv>cΛN/w~hX!P38^7U*CtP 828QW>g\}ixI#gr! 
4ۄ LתT["m`kc>Xs `m oc??k d'V"CGz-!6*]gǵ4\ s5z.0Rts݃ݪF+l]W>@;9ƪ`,5{Z,%@Z^eOR\I1k-J1fvnU'c]!L\IYE<:x[;t#&G$| ``"Nk#8^:x6n|NmT 9DJCPr|RXO8JMV<6*_t%>dY3 KH夑#&+]qgV VYL}*xL:jaQ9u&E+"ѝqg7rǝsOOT[M'ޟ5l +(W>T JDUqG@ J9oرdBmeP6ޕ1thDT"D[`d38T *tBtv2?ϬȅցIф6K)V9RobD @ ݵ4-*v f*Z.Kʊ✼IC8id-\LFRu/c7W&_kF}@[-`>H@Qh[|~ dz^u!)@!q!ypZYmwPU7|dwnM*guxOqq˵;(Š#)@ߩ-V`B p?67][h9a26&bdzgx^޿ xmߚ.pl4{tH;w_Fmwk7j]z2},#Gqc/zt_g++ b<]/̔{ eg엟f6?9[^Zm=gܠa _FǏ5۸CjΨi!7*+jQktҚ~jwvx4e5.lA%v2*Ly&/j G:f$gqsϪaq݂y q'}ˊUZOiMof7UR~fg~ỗ)אp ÀҐqQa{]d1w~ ޺\ov~6j1L~ ႕ϧc?cpŒ髫F^ꪑw7j*ASTWBPClΨF.]QWZoquoN<#?ĉx ujۥ1ZKJvF]5ruuըaQ)+ʆɫf̱^D|RN\/?N;=)w~[3ZWRUo@:3kes>z&w>s '&8t?+۔@3 DC8 /ʙӊᬊW(Y$yBAe_U11:Q[M:s.,Z=-Ò?_:i:=zUzU0P5h!|q}FW-pmܭ05V-ZN 1S%̬Q4ee>% U&0 & S%6 au,g΅ ȹaݪ.ht~{|qUoR\zwzy=tQAo***V*W+*_WM(2\NJ>$aPk yPb"Z{DFdt"aMSʤC0eoIf{i7r+qW 󓓳{}ngis/uo)/¥/.[.³ dn'^9CCGC>>\5_ 83. k8: )St(>79Oȡp(!CKWEjDg6)Pj޻ltPɪ+ ]5Q8%dJI@16Uj[*X GsC*{Jc4 "<<>V*=7UJ~T:mcԔTc,j#suJd/*VkoIkTEI֌ȹAnX.BY.W]=ٻ_MN"vM~ :::yt#wXcGlЉl J:QU\+9J5F/AUc(ޭQ1ʦdijkFei>ك9`A E#6C5nGxc Ѯ̆l:J&i!sMR- f協Dh-dX'uvg0c@t< NBRZiF~xx[ א|͸P"zqԋ7qL+R)XiL2FZ` 3P&5ŻЋCч͸P}hCs>9|k^oa}2q@яA!G?N3~^ kREtdX袒 i=a=^>׷:}ɛ-^MΦ=YIók(b k}|@1A όEYi DLCeCTQQdU$%`3KSwqj_y`tZ>pyj0g~KiqJ QP:,T DLNY/]z *X PŏjAGq[ӥ4y4w/wS0s;<ŏ8M.˫v}M`7|o}=ݎqQ VvvRbZN9k}_AYݾ#/Uvj VIq/HYIF4&?>_f_FE5SgNjg_':'ayǯ};ϧ,͎7w2뇿ϟ'</ޯ۽'D_{7 z7_=M>|;d15f(-tŻ_ӮW*w5*⧳C/N~~]7ޜרWr|Gɤ*oQS/i kyӥOtO隝.yE7b}/L´O_rMM>\5pKO.x8~BIgu|Ogg'[$}8}C~1H[oF/|t:I\Nˆ.Y =O>4y)"^-}9uLjb+Ƒ.w|q}YW{]}#mVd]lZF0eAzٟܶp-么Q?:m=W=u7+e|zReuz5DQV+ۄLegTiu)3?2 g [2wZǻ-fP0}SzxC>f2vohen;7yݽnL^ Z5cVg"v Ъ~scɷUum//O:Vc8Ak};m;[JD _+:WYvhܰ z0%$seH",1faȘ6*Lkn0|Q//FF]`@ ;/}D }vL"'P E$~vT"%fTzM^T^vsZ^An=o[\8x]i_1aI:-u^7 Z=zݕ\^wքT:9 uk ڧX}Ow薼I%"!vKW4Z֢J] ^Li5:SѲ;G90ڑn :BYxtUͥ*)FEmN^ dIyAxA-}%,&m#hCsM.P덍l/U&.8:H%z>5lz;f C1`Cb\kte33RqJL<{9t.HMH)h,؂֓<! 
HBNKj2.xՌ[^<$sH71x}9*L#e˔ +h@~(~Õ'6ET!*FPD4vb޴5rWLr)r48|RI6t`A=KU)nIFIEp!$vDkAHfd**\ڦĢ^NJ'rZ$-bЍ9^l ~!*Ik d @)*xYB@Hu l: xC _)["c(cD^4**1j%SV)*X,9Yr&Bj5Ziq/ItP1)^G+:K(eB FY*1w賕a"66liHHc1x -5&E!Dggh.S4#g?e ]r՚p8JλzFY#Í: ){%u+#| o4IYg^}[_:o$0{_(oHB !]QFaBA \Mpy5'GWibd:S!t9 L6 D! OIY: IcȐei!Y }}j˅X߶AWWt`:;νGT?hr 9 zWܓy{߽6}=Ow*ץq_F&]]cjybפ{^'5266Hq[p*;pܕ}nPQŠPW_c<5.XʝJv@ZR䥤2ѥ8R 璬ƚv,h:$+NG!,,(V@=^JT!K%IҤd=W$*mr !ɘgZ :)[]ߥ_#qdz+qj[oa]MUX+i=sJރW2pPÄWZMԋIݪç;7J`Klb9m 15CBy-Ʌz-_EB-e୯;͗K IHI ibI%JRSܙD 9I8TB)KQ2ɳVPq!e%fǝ'4Ԥu+dҚm" 2aH%h"JpE6G B'"H0vL0od@מUE\Ɏ|¨YTDP^xh[ybJNGRl÷8/tk rӸ9ah$}IJG+WH—Lʻc r,&K14,*W#BT 1k)h>bR:%$be1]UkVu ݁tNHwgkꅎH5;uƼ\@!҅} Lc $XZqSs">r\gY[UkY2(6qI5:@AAs*K m?wmI_!inVn:5~J(PW=çġ(j(-q3=_UW}MLkB<.hMĒ\TO$ց}2: 4&; Y8*v5nߖ96ypBT`}Qx\.I~3[_[t't/c)>)Lˆ1FF m rZdj7i/h_Pű3VqnKGhfj7y`wA? ~Rak0W@Fܞ2W/-a\nn{py@o}TҸ|RphĝQȤdK.6wz',Z <]v/h%:zy*sh8|mjG^G?.QN Rr=cbDmb9!S5ZN,0xڄͳsuD~^ Z*_Nti}!2) 5^Єc\ZRS d &^gա =^mI\\v$oluc*0]h[fO. W92e˝I.N Ҋ+ '!u‘d('\|$[ɷ[t/$w+Y\LR` V֐,1U!āhT>Z0뼝uK̚?yhk,¢,zCFqHAsЕLTM 'Vi ntB/ϕڤ C@ *R%ϕ7:|h-OpVF0ha"DD˧wBNJ't/c?Ve]Jfɺca@ٍsGSTJQ椅_B ,QSb`:88cR*cX{Qã%_:i{lf1"0¸zcH'R54hqLU;Huk 3B9'@ͅFfn(U cR$a4JXq_go>j1>YqOggy:S;r:VqZJyQ'LpˢOc:]ؙqr<]'6C: Ua 4|\OG d0%0LORI˟آ7wRޜ^l-MGA7Xqt6bJ*&{]d ql9!XFN}ghK#R(_7iM>B(nt+NF?Z@Ȭk։+X>br(W;K}Tsh ~-CTP^v|5|& sc?u׽Ɩ.G:Aqu> n/d^$-k'=>F\Eey6O0y*>]̟ٽtNQT?uɺiM/A8#Ϸ-bg:lTQY_] O`.?ᄑp.;x÷ F`}NA ` xG ϛ>i5ɀo\dkb𹚕lϖr1onQn-U:21-X;aME-:~rR?c }ހ(fv/ @x(_}ށ5Y_EbdIJD}$f1zu<(42r0`9m3L $Bk{`r|>Ā+lӃ6>ߖ=sb.9 Jh"+6Nd('V*pS$f$* idO շ6[qtb)΀72G=:Y=%;;oyg,ê6mg?i;0$UA, V1!$EDeJaZ&UOEڊoH\N*V4_{*{$y(P f3+ej 0H b2<~-k#XLOs411 s[ L J@lr0kQ0FlL4RQu+j0xdRԸX48MQ qZQ]}m(`<vaɬEI{[N.Pwt{ฤ͟/G6 8qJ<"½4IJe&aʐ6 |Q't~0E|ߙ ˤg]7S@BA8(X™2DX~@,#L@v8MRHB`$E*"fX<őqe%'(I6ƞic_4pn3,EZ~ A11T \ 0@y)s!᜶O>H[LQ(LcO'٤/ꚤ9MP̋uTs:>&rS Lg3]YJu3])>gY\YVa|f)h+:"sUr|4*+\eiw /7WYJ[s+)1 pq{ LTM: hDb ,;ìg :zug4WM&8@"hJgdXEfdmr߷n*/սS~pO3~>;u9A? 
,+>B -sae_cRǞTQ8s<@q;E&2y[6_wpM2f5 co?~W"E%( 'A XKn6=,4ZD 6:@y}D wG mF%OPwZW<m*~R1Q|ѻh+Xb&[|>).n@RjB2&Xu{bzKq E[؎yDs-1KF Ỏ!r=\>$l*gn[zPǃ-_B'q8_w&f~`x4clự2&Msg#z?P12 $rf"gs:$9ϭ5LENpop+0(;P \E %h^(q˚٬1!bNً;8Rl?' zG+m%cFH>Bɻ2ţz\^~3E<V<"[*-Q EҤfzePڜl& 󟋈g?SZ7_~!ݣ{k?OIһu{v]N~r0=^_ u:ӟKE2t\db~\"~9Tw_w]^ǃqnI?ǽgIz8kԔO~;% Ր=<)זk2?lkD4jǜ6q]ӿA;MGk/XmVm;5u$/ӶjNN6ȯQj%}R*ZG~,+:$:]Pwȋ{'Wl3œ5i3.͛GB&U}Q+(u>>K.{_FI0ezB [fuouyN?yՃǏ~h&9O*LvDQVUB&U5'D/9h3eUE[=nsqai{[zj@̾@>-3#`IX:A[(Z:k/.l{ӎq/-;2I˃gݙ u9{_rc5IW6lb'3?[m,qmީP2WrMv>]~7$0E杶recrEOLR>`dR bVRaϧg1J(UPW!P<{5"pL9DCpE@IγD JnRPҫ ŹjqsŅU9G|?i8}Gg SV?DQG.4DfS>X 7 IɊ*cOif!zy%P`}+"E DLxj /JrB#w/5Hd':qUDZ2L(%gGB$ā,6HP'^j-zX9Wj 'Ed, &܁3JOC.x~>yp۔dmJ\mx''ߎ[ vܖac6欑RTߘ#۔JȹF^4l1葅 >0l]HszۑobMxrYv>HR)zTtdN!/f#I[ɢu $/y0&(%68Ɯ(87AAx]A9W;O}D'e͕#G`^^(Px+elR*tZ2]6^ײ,ne t⿂j+Zfu.0-llu7H2Є:Lۅ.p +4w6dtÃHɃO?{ui*q2=:/&$^SN}Qk-OnnFz(:K69DYy HF!g;|jǃ |%8PCD2~Yt" ]v=6]$ΊӵSK֢ ]6{=X4w?i h1啗w{ꮎtp?[(я/(Q옏GǓU~thi fn1N/&,5VA]UpUF9"sMP5Y("`NW$~X?J5(g1JWFk4q%HH23We/525~q_Wf{Osa,כ3.m<;LPJ|z]q*[%4FHM,ph9<9a/xV8a{Lr !KΦ %Yh̀.j73Ab#\%nm$<$<w!Bdh)0i+ǸT#*[ܣHmcA>K.oaT\ٶb$_:Ű/ R9 ``v xƱ\|YhC{4*hO7o1)·*{WyA}`X2D7F% sRx=XmKB2FRsv S %颠ƪ3"3҈">uҸYN2#Mg`H 1 ^wV#r>b_SMo%CU]U-8\R*-IX f&][?vb8;>"` ټ <}h˷-Fm `=]Xgn=lث#iTWG)vI@)*GC+q~0{*f #γd+R%ޒH_zIFGD[Z־$Y (!b#trY %Bɞ#8LlUq))KKd)!mmTe202^&<%QkpDY3hKMj=E5D*I>'>Zr8\>i| "(eM Qs*E deEP:_H(*paLB dVNQK̓`Q0nٮZdBBL"X XjgR əL<՞Jb,;Ay^'}zzb;-OZ+,yVqyͱ̄}S%dv` 4Quș&AӔ$Yk8 ʤ4Zx ޹zJ| wODzj'moߜȾ1kuڭHATӻ6ֿN% W%V NbeU09,S~Y_Jx;dRn]j<[Q?+~ׅUf04%ZžǑVn*-@(?}lANiK#MRJ$AN\Mcw$}ZiG>;P^XV`l< `ӒH%cbn!e2. 90ЎyѬ\N9& PK)sj|L.s 2eV1'Xr'j"Q2-;Zc˝wCQ&~^}YԈsJ 0k>g][p'ncd輌 !0d4Ik7$0)gRus庖D`^whؗ+gqgׇ6KmflW 2 Fbl0p,eDkY7~3`RxuW5|\\)YCX!`* BIk/\M6V奦X+sٰ]Mۏ+U LĴ< X2PD6V`¬&!).qI ubxU^K(-{K'Bg$"5}urB?tgg ? 
j@ aPN-x.dn"]bk_HrzUSI%bQMzj7~6 =aJ)‡Nv5v1Ł'{흺>n]M"kfzЫ֔*0e!a4@Riw{'>ޭd vMѵ.Ͼօ``%p?̞-gƣtýNo@oQ$wt6Y,o`4O"Ń9N4NRl뢑zytK>$ yqI/`<2ϗlEff_tǿ{_o}o_yGy_{04LCX_~hpk֭~[s#Ey徶&oGYmrʭo?(~7vBy5DzZVlڄK 0' f5 5?B%_CnbM!ɒG6z%7E6 O޷GϦ@oh # Q΋AϳG7hdё"\ȗɃŽEfck;aAyU=z`M_ Xc|,&f& ,pb2A`(iҌ[=Cʚ$Mo_}7F43'^V:M&}w|ױت˝ʪ>l4a;X;Td߷QLtʕՊU¦PY)iU} e"UKHt'i/,8\?>Q-Idƀ 2$ʔH jD @)0PRrR!RO:S#u;˝^iT58-jfԃ(XW2c $>Pq$S FfJFwm]ա'6+ ~:P:hy|;I&urr&}HοɩNOo߹ށIr"(,˩*8rt bQ%Л8_vlA$/zÕ׎-[|j.}y7g>syѣ&|-ˮh]76,o:昩WK_~* s~֦iXK5lȉK5kiK5Bk6$jn$ Jپ$Ѿ”DAtE1CWwZ]ZCX QU|Qɴ-BʖBWڮs+N֬ Fc "\Y ]!ZNWsmOWgHWqIJRWزb *J+D:Btut %+L.e]!JFz:CRrR +kI)th1( *2V֢/ToÅ]Z ܨ6&D^I~E~?otvhy /g~(_1^W쫫oeYe&ne(Np# .pdR?y(e-xE $Yn20 QhJ, TDFBWЮ\z*$KrA`C UCvyoF(!ݢ+Ivٱ=]0 sQ ]!\aK+D{h-rBtut$Ѭ$BGڢ+e)th-:]Jӫ+.%/+@+):]!ʵ)+! BFCWt(YΑ$Q +lX1tpw#JIz:CRs:mbq_6Um!^I1YDrhីU3tg(y*ϒeI/Y-@'Hc7Y}T͞Ye"!m>~T׏=N^ޖ(2'aU"W"[ymd>u2/`.Zaod: Lvp Y5&1MMTD^ EVŭ/;íͤJ8*bEoV>ovӀ\d{ p[XY-EaN&˄Ӓ 9Z45y-&h Ѫ[4Rshe9`0/n9@+:]!ʎ%tuZ#ak1bۢ+KYC!RȞ;zzA 9~'U#:h\vbu 4ZRSn@' .AFu]7 JkzpA[+yI^ʉ(B3uB=]%]YhHKBΆkV}CWlW.`sFp59qfh< ]5C):jWOH/i!-BBWVT3+Fiڂ sUB(jV䦧3+N9#%+|rW{3ƔBW_NW &SJyAtףߺNWWR utu>t%e$csM+kE)th뼺BWWHWIbKZTb beu%Yҕfw)ΡƯB ,GKrU Z5.؎4u6l6)nxQk˖Gdمgw.L f XrWR]%hEOR9Zr`) +y9zWh:]JCs+P.H]\Md)th:]!O qgfͩ5kO of(]t{SACWWR ўB3]+Q Έ.5ٻ8+CM^"Ba`~/ZRLR#>Ѥ(ΌJdK5bt@x>yDV,Aҕ'RW}~p ] ߺtzlQ6n0oڕJP>HJWCW,]1;cfJЕ} 2R1U`x~q*kq[+A^] C;J5񩵧6pf™w/riGK'4xyR<d-N^O7o~Ȧ僇n8ݐ8{^vJ ] |2[V:BxZ~vn̪ (Qu+z Pb>1`m/.?~g`l$AWtC/''!':] \ ] ڐeJWGHWNDWm7Xe JWGHW>̆ ?s?o-C+A%]8s-dBWvN7+N_p+)-ѡӕ tut8_8"G\^~<N9:!@E74&w$-(wХvEOF6fk0GQ>G 4"lꦞNy3MV*A{N2薩chR6 CNlnAeNJWGHW9'opЕY/L%]_Xc{[y~hJ:xb/zgqCtfJZ.:] JtutMgZ`fJfJF>tJo– +M+A\'ҚR:b1+q3t%pf h=+A镮s-ѕvqra3t8ۃ/c11L N|RN~~q!-n Mu~[nګ O^^|y@bFP}@^_$v?P7?}5]|)ܔz1>6}wk1+4\^\^!X/׻_`o/|{''O)ݾ?8ɩ?O ̯[|jܯ^w7/ړW)}rۚBH^;_~GȪxߎ<>u Gg/薅yxB o>?5L`~ q }ȑo~]y/>+ (㴏loO|8g5(ԊPsؐ|/l6L:l3bCROw}_AryE/^>Frq5r#orf`.NT6cgiMͅQ%PÞ,ggvΉ{0\_ט]-l\EҼ76'OֽvK3J?ٴٖG#ٚOw4}hStsm1"X"W6V4zC C"Z(6Q@dRM61JȖhXf2kɾd!VѢkӾ._~j)dn:dcȖ})T13gIG>KA11ӹ`И[_W(:ԨlbA~h`Ln^=$d&>Wyu63(v]yD㊳N `?ɏ@!}s8[P 
hsf0Τ1%lDFSlJ[Isi!s`@f3e mFi.f^ ՐR"@"%!Br z#8ˁGMEO6Xj^l5XmCrf0L,,Ǟ5x0f5D /5.U{;RCv42,J@vN d)P4AZe61L`5R-ϰ`򀢳|c0Z Ç_Myˊ#l|\5 (3TbC ]@2UvD޶I 19q'YWeQ\ZlL0=Y{mFe?R`T# ` G$uc6 >k5rPv RKDet((*qt2km ,S\D?5qܽXe%C fC2a E6!JyM +QJ՞ꇱN_4k*3} 5 "X<ίJK;N͐jPoh#01}1r0k@ 5H$&@͙f+|JmVAx`1g#a n1)K9AZhPl-KC%:cJC, 2X[(u3Dfy!܂+C SCl %RI9n,BU{LygD)J6w}TQFyW]TI#UUDI)bh=9hVd0WGȧF7KJ"5ض(ʻ5dYBF TDRG4K#FPg#*, ԙ)DΏA;X,+f2擊Mid- !pB\. v3vf\ʲiy{.Gh*>Y϶3& m3Ska&a@ /-*}t*YT#$]+T<u*3d`Qd2X(hU(+"By0z-$Pd"uBu C102Xuy@{T5@&[OPI#6H܉m >U]\UBN ~d}gȈU;6\ l.$`]0~Lv?>W|_ͻ<ن)Sm6dUW1d@@F$4V=R3/yj"!-]%@_141TnV?yDtePpή%BܶKAEKշbhǎ@H[^FwHԸ^vyU 1buD8+1j"9 gB $"2Б5v"8Ϊ$1 Yӌ,rd@CuqB&"πEҪ2,j)cE8ag4H!(vDScXgo4=> =i; j@e֊o; ziYnu3{5~.̄j; HG}>;>M6J`Fn q%4PF#o'Qfa!>o'"8$`[r FX,;1R)6Dyǰa‰J;IYPcdT1()Fr&/g=y ϪkP` tAq1յNlF54<[ -+5ɨVzPyXgoɺ!LFdn7\j@׾h{6x<T},ƝA=-pW^u5W`a2҂N3 _̂B\Y2čtXvGvӡ|G8 8lC}WЭxx@A4L^9%l6v˥) DL0rj!;VAjR[nIL](I[l$dipMl(yH_GB#R0r"XH5zF?˛ۭ{\v ([é;mPJ="BhDpJPt4712-~'/eo>Oon%]AmnBvt}W;mҋ}9y~ 0 7z|x;y{}Yh^d[廛Z0O9;M>{e"rq6dS[ 5en1cҩ3 [rێpy3n ?BPf=(@& N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@:^'P -[rv r/m @[E'ሔן  w#N3gҜX'@PA ZU+jTPV@ ZU+jTPV@ ZU+jTPV@ ZU+jTPV@ ZU+jTPV@ ZU+jTPSMj==o'ׯ݁sNU(ήg R r8[DxLlQԂc, _-ǓB}fZ7EC_/jiu< 88 ӛ;/j&pz;^ gq$6yY>c|]~[%㛚4RcގƗz\om ?r3M&b#(hQF `<Є8`=E4OSLr` II%:~IYjH3``pEr H- Un"Qqu<+ *PpEjy+R)\+7' =k^W ^zz)\V~*e&S2WzNUr@" WPpEj;HWG+z@B\`pEr HT:[quξx)PbH. 
HR}J^:\)`NiƇ+R+xqE*:B\i pE_>rCHeQPQ&`ݙPgL5C;5Mq /}tX]npF!=Cg !ɡZm#9R<1Frqـpw W$w8"ZWʊ#0-쐲&+k@j_!q*IS?쎷ޞ0|x$^9 a?T*3+[qu\o/ =\Z}4q%R W$ ǻB ƻ"㹟Ji+WӫV+ W$ء;PfPquR^j: \`]\#+RTq*WG+-40 \m[;H.ԪT:wu2c+> \``pEr% H6}QJvYoq8cЮ9ă~{6Haz~cI{r~ew?%} $W =5BO0W'!?`p|0r06( w\ʭ-d؀p2UC&OI%9Wn[NpR%X׮O}ݽJ/- Iq4uz(FA1M*>FL^/dʻ܎4y ̫ZXֳLnW[&Ea3Jn]sAx@-.jo͉eLt-SrhʦKM{wޡز[Ʃ~5E7y4u%x9|f(_b>Z8mn$,d<ӫ/=ӍSs~>ʛ}8o:K>LްXC~b~2GN+prDz!UfS&۵6f/4fvvz!0he j)Y.!eȜX!m% ie0 Dɦ74t9s~OgV'NAmq6Ȫ`q*sLӘXqY !Vd۠UPެy_:WCFAeu}o1tsΖhMApm[cw~j:őnC|&+׳EOm U_/6IqlpLAhտnyxiZ/]O7}秓qhAq빶X`"u]A`^kBF4^iEYzZ?^p4 C4fh*̚saY(!=+^, 0S6c(JpZ`Ĉ]):cB̃츟u&g~ݬ5 wKΡ ƑɺalFQY~$O2B4`v.u_=נށӯl+ЇʩR1/ID/ ńa7{쭬VgV\NjTM]~_~ApN0'oY}kƗ|fx:sv{'z Ē$(`&]%'Y0C'dGpp,hLǍe_gf[ qfɭsϋI^nY..ôLX&@ctBRVQbNx=^K9(dfDP< E_:_g&r`SηnGa%dqC628P+5>tJk}0Nw=6Xg7}YȡCt;1N^-h`~rCf}Ɠ<Ǯr8.&Z흥}nwo6TJF_8)vBFF4ila?Mft9^ޜl Bme:P+M~8:_./ث' g`^ۓmwh"_-WǬǼ[嚔?i0d^eh_tP椝6ZKݨoc8s<1k@y%ӽ (MJӋRh9Y^ob`Z .˲T4FG-Nhuڜ}F c02C@XEaY.K⾠(=hLΎojOַOǷ|N=J98mJH\8"rVbP dg;Y1 AX`(yM|Es[^D IZtt 2;!5vz05w>M}c;hk~NqgdHFpOL&ŰbWfy `xv5lIhE8\* DO.:mS 0.JmN9TdLdWd+8ceF<jW;鴧:7~jr_dq2^,pb;äb4ʉc)MN)cAE/{> &CR6Ѩ`TQ0cPsaѺs~4lB̾P38ڢcjJ>fAuI%R:*FPjØ-0:3^RGSmPs2z\ae(Q̢YhqDFDZe w&iA: }!bgq(eDY|䵴$ \0$-I4+Q -sdVS"06Za+r,~?q&&nev(/riؙ<E\}Jꘋrr1g39kuJm=asH 8^&\| .Yǡ<P@ئuo|ю;CNl(VYO'~tWsv86[ekY{>~ 1ZL&mZF2R*Yj&=>9.Ľ͕}/Gޕ5q$鿂Г>ٞjdePԑEaL+7$ 6IQdwuuvUe֗YyKSsm?l2] D|on~ wQ*q DZXAr) ,1$IFFQaƍ3Rۈ,7}rѻзu9i9~pqX#yԊWڦŎߝ>!C (əSsJtQ*&rbQc4e^"܊Dky R[ T.wNy@Kq<1IAD= )'"N"ee* Dsoa|3ə!y=b,9A]:_gl c>IGiqû9uiH>|BQn}oN-l*#<6"IHA2ܤ"8MUHIT(t^^<6ɣVIf BY'GDFeҊ!mubLzd@Q-KN$O.?Z=ku%ǃzus"tɋ|p%LfyCtWt3z-zZUG!mD] u vak^Bg7xbP9@@`1z4S2J$B)+,L/Ư/w jι+1~\~y1.s;}rs7rR]Ž?.W "<( N˴RYLүtAoZvTo!#imQ𩮯'E sFJg{oMHnd8Z4-gK'#8|F'3JU"+YNFw˞|eW(GQNBuw60?-{9z6YLdOc7};_p"U}i}48O'ql_$uO"g.M? 
~.+c+E%N?O/kWSidPipkQk1x") j/~S;#:we/,R.>*WAP(fjZ O*v;˳=+{5D5z4\[qDdŎϮ}{V#65Yk۞3(=3,ͩzeWUs~IaWwpwZUz.n{ޅ$#Af7vo6?2}W?Lϑ_R#c juɒSs =orU n;ȧa`,?2 Z]iwl6o|flL 2IuY2@u{KƲ3lƼuyoWTY /j .v5쬘2]Rz[Ѯz䚝3 R omQ[8)@` ĘDME"AJ!.#;ñAot?3%|::<s)AM$jXa 4qPۊXF^u" 騘âEwgkzA}ٞfSl$t_FQUFQiMAuT$I*cQڷìŞWd p?[v=9f5D4A$kXNNH29V;\Z %޲$fĈhYdKOS@# 'x 9]3s *V\kQGDKbɉ:#AKi7{ΰn|)7yyV𰾽='O+ݖ]_pB,tBQTST{e(zIb))Ho EBPu!HPG6JCP?&eA:tt]l3r 30G[3;$MPa<. @@qeK\g,P3m0 jwNۏiBI ptrAQ B;\,B)HoM1zZrDuO*xQRO'rܣu$ۥrׯoVs4m0{2!Qݗn&~pB!A'mC!"u`I/ΘQ,<Wx%09!-e}5]dJE7˺OR ^K$Ѥ˸ )gRiAbydK'(h"Э;@ :t. ٨<GTFH]]튜+ HXtHh-밞L]Uqw Zui;x?S[aѥeͱGGtgX}վ!f!ү&grԠbZ'gOٓWxn$P ҿx2;V:&?qD'3;/~.MߛQE@+CD G[ Ò9KK V5 ea7ni\1uZ7=V4jCt(+u Z~Uu|YixTw^4TlY*v2a0d?xvF> ?\JA|m꫹}q<Gw%'aFұAtIFr+;վw#FҜr9Ht^f.8p ׉<+bwJxa,~=T<H'+ j!=+@r5W `-'F~tyP612K'U3_xYA*/lCB$A^ _ֽ.u8(a/{-(.VB)r.`b\JʰkuI2YU)WK20^?rZݠ~Q S=sI~ }9[HJź^*`u+BP}Mګŧ_ H!/+^]9*>PRASHBI"R0N$ hEy;a=ה7-etW]oZljΡsgRkmlo7\a;oOFx8c,!,VBaGs+ъ;ƻeJ oъONtIJfEEM##W{K&A!2bHPD L C L } {K@D)RYsEMdQQ)%tbL!}dփ AXseK+L 뫢+VE3Box09)AOBny2⦓RI|v0f'0AޜJ6A>Ǯg-!YnD^7o!]NVBYCư$ȰF(gg +0;luCfSgPdqmڃˏms9ws~k[H9U f1;-=[-+[Jr2odpvv:\<8g=ޤuIT8Z+vCZ߻2I?{W֍\`=9|M-i_60%UdѫIc(ıEyf 3%O}blzaI=r6(RZ/irwth$YwH-+jY͸un-=/,lWZn`sޅI!vk֧Fwp HWfNق5ӟ7O׬v۲c:^ _ix[Ü(ؔgË^beY̱D?~=?^Ad9F,13B_ds*)Pp h$bfibږ?la^s Kq0Z*2]࿍MPw[NٮՊF|yyҀC6%VFK]Iœ]3(Jj?z`4TݻfMQ226u#G.ORF-YeX۲wKVY Y+ >hR;nrrdj38Yx T&>E%Ecp665ɑs9፩]q6@{2aȄr)GVOr~ZxڔK>MʥQ ]))riWajp6]yŏ Uڛ*k* /Uj{Xgjѹ:UZ 9qȣL{EJ·QPʗ&4Wx&l()s=UjB-l|nx-7]\ӦsZ){#jJh%JzǑ(e!LWdny\pp}fcDX02l,NfR$I<9*`WEF})kie']؟)$fGer2ƀ1 ͩd,r3ec""?  #YMBU\Ȑ :mVd-YA纴@Ws1$ʤT" 61rP$B+ҠԎ)+E .o^v~Hο_ pMqY\/KȻO] $5/[yM^\_]:AhJpG:/׽8~&un(.qq3n(cql-H™ӕ|6:K%6~6n5KNW倞X.1,H.0czrKrgZ^8O OLn? 
\Lh݋3t[ԯYz;vWU)c$2dZѐZ90FFk'>FKUҼܘӬugڼЏ]O>~z!.rz|{ak'!kb^+BFf\;TG4 lFqqUfyKBSPr~v}п<_rUoerjz"vT`GOrӨ*342"`JNQ]~;!S[:M;;7@RGo^_-],^\E]c'?Bd_MCzNuQS[TI&V WV!ޕBl9O&̛AlϛB̝e&a I8F^X<$DqpFV8]( ||mlQEdM^ySiUg1h $L f ވP2S4<Hg84r3N1~x↟e+9CxY칱ruT^NzQ4&:E>I!{#2$-Dr+eHԖ[eHTvJ!X#QWDhU!WcQWZڮ +;I]=uEPEqsDp)uU} uUꊨԌ3TWDŽ`ѨB.?tUP)O9+͸~Dꪄ1~4ꪐ XUJ3`\sTWZkBw,f\͂u^v^- ^*2o;d4j^˙}FU*L@AgB*kjFR\Ozӈؚ;c׏~iqt\`ZsF 2=g9觽2>7_@4ye&\0NS#}:B+7O;+VD+Rg~vʀwFrrj꽡ԴZDApz!7.xmYJp3}0@=&$O,hHL ݟjätRG#0 !j9oI]r% <c\D0/ˏ&}QZo*r5ءL*%jA]]*)Wl)ë/Έ= ⓺{,]ݟ\|Buuojxuuo*M6 Uyۓs%BG *"]]I]=Cu#RW` G F*Զ_]ճTW2\|P۞] h\ؒ(y?Z^Mamxw>vW_AU2ZVS1ƜJ%ս/Ө g4|^\l$M'9ņ_dѹ$pPIDƄJѦt*%WWk_,2oŞ< =rV ŽFLj*&SDŽo>eĔ6n6U&hkvJKI>Y-W"+HkPN9wJ*:MD#YhYz5V2,hi|UvwT6%?ψƦb.Jϵ#i'M JƋl6&r_䬋͜X iBbRm $roIb eRk2K Xv`81rV0l2![l)OۥCQڗSo M%Zhl2g{R2$$‘<)Jp5DI&LQJ1[\H&E4($58 YLf cw>|.t/I;ww8}Vl7U|yn0)t{وE;cJD.3L}d^O *@Ө*{ꄜkD]Q6 :̞l؜7@- U鳑ȹ[⬯@G$o:3X<;Zz#n7EŮ8YOdYZ+,5*BWMoʎ0(ST4 e*dX3Y˘'9DjA[dPVÙK s;z [kԐj&EER BP/< (ٜLk9W3Tmd&vdUb![¥DfDžq_lӶIOO;5?O#Gl4b[Kk Wg;JN& 0 "V"G7WHqWH%h% d4+L|AZت,gL Фp =LH!Z@ xFz!2"Vg;"Gԁpqtk:]q*"#Dz14W**4H1a0$V;Ad)$CLjEFǮx* o–N;cF^)?d3֌bn~շ~Jf=^%Vo=}%:˞G-fSٽY/Z1Zmަ{ }P3L,XLZ:#+NrAhF̊e£0™mx >uh]ðF*Li<6}8l|mϗK>}?O >}bd ʖP%^NR!g&r&8M,xna2(Rt{#­ Bo Rqs>xh8el֘H{Bd1EbH @.T%VbgP49 w`Rd񭪉F!}BG@1ʣuiJL>B孵A567Sfzj=`K>yѧfa0}-UL(I5X JʹO% LoYf-:3ާzBO< y2r y2˷~܂Jƙ, \Z#O)taVϚz2xvg$"I ͑4`tʶԆdL>NYO5%Aӓhi 8ONqV訵+jMP˗5dB/ꖐҐrkX/ŀ=ӷgkoi 3x~|8-j ΆU,OFC % (>a7=}Gwnx_x{e+gů'8Kt՟[|qz>MCyQ>>}:{p>k;I3ynxE/ u4Ygs8hvOIgJ:[.sbvZ"u<#4/ʯhXTWI|<2W)E/KLMrTp\\_&epWW?yGժ4cJve +(?jhW ?zy6V"w²|Y3RHW .'?gY3;AT&WӥuۧΔ?^7`ΚЌKfƑ*5|Vţgj9ӉuiQe&є3^lYURQ9J瑷욹v6:8[Xvur27NSsL;X:fJ-)$˕[j6׹7] !ҋ@][sZ頮aIW]wab^EQ Z(&=vnE>̮0]-|.LJawڂQFA]dfy>1JI%ON^vwadņE߷Dc0o 0xC-'d4=`ëNė7t̩m.w=hg1dс~eO/P=LCcLM6`]$Th8w<8 yI8>`3I[')'NQIۘZc'`%N)F;㍗`h9 Ndu, ,0 Ī5!Spȅ`IYD6HP'^ҮzX8W/^l(BwBW"alhq'rvZ` wM_e=3LL{9q׬4Ac.yPX0䅮k":r2Zi:\:)dmTѤmkĹip3=-vdٟ"Yip Ɩd5DOQ@Ch ]lFDd2!}1eO>Pa%m&920f."hUp6EV=H=ӿ*R֨h|!=T ;j,eO:p==ذ=ӟ#92PƓg eDRf4^yɍ6bT&hP`Klj.@r]tݹY("4sLIlNZsBKb I(LόnŒm}^vK]bcYmRkvݨ &Hs=lbhU{ 
fT'"$lONnQ!Ͽoh~Lxڝv7{qWڌ>'Mh\񵲩5n[B{p}L래uƨ[cZ߭2/85VA]U`]*#Xyr&,TI'A۳GźXpy<(UL^-ī4&2 iNd,8KpGg̩Zu#Ь^פ;4Q G??5b͹Bn[vWX{w?TO1NZ%w8MN&f&V sDR0AS 6ܠ!'sްq7(׽aGYꟻ:DԐ%gSZb. A/],jogL 90$Gu\%nm$<$ܢә!U.$J$Kz`̵s*AZ\~ǭG-9efJ9MnuJ?N}b1w*=UA|Uގ{5lMy%VL4'H7AGט [/V$O}a)I-$7й<{tFRqv S %颠 YvgiD>uҸYN2#ug`PA$⅘;w.q_7gxtXb% >xN"!;h LBeK%q-Sw5٣"ݷ*%Jm0V/Bœ%XR먕E\0%n)D6+# c ň+cC6ȹ X(5%-|V:Kʝվ뫑e!ʠ'I_$yAӿs =K"zAćhs?h9ߕ7yS?x~.lPJlr4Q(x$K3 htb7yv󱄣t4>$6p+HPo0z ,SNAmRkjmL:<6m-FzyE>D}^M4T(.-m>MO@0jBjc M73a7t*$y"As@pE@ q "-YPFF aKݜb>DVL4UxDf*>. z2&̑%!\43h|B"gcޕqd?3źƮG$$q61:%FITleᆵyJl""GUWx @heykټcB"{SXTѝ/ᯚ<-,%|7o,Ym]^a_L#p\KAr58(\k3*m}Ț  * ҕ$LhO(p$X=F(3Dw9 @>&Pn쨴)ࢳU/R5'ʚV9yćwzUښ0psXƦpl?}9UȐWHg רpn*G6M*dEʐ)Z_9wkaSx}ĚUZolns1IlP:ES,zJeV䝶]=EP2x_}Tӕ רy0V7yju<0fl˃fwHz§l4'ݟ]wB5N4+O%f5G|j71wHy=Li; +NY: en%^ tqR2q\ \K8D]܎tq.ǦhvۈX*1f6b8Nau0Q@4XOGajF,rk,DDLA-a$EV 1uqf5.z_l7߭I~dZ<~s*o\H车(%wN<DT!G85\LQ8Ḻ|39qk}LޏƑnW,N_d(zrK#HvarB!``30B_ T&g}S0D1mo'+ykto?AL#h2=sk&'?Z@bIkΉk8>rR>KgeZ? տ7/ݍ.b`)1k;rlr$?ލ ,vB- 󪖔l-V5C*U63X>¦+Sr{@gmoۣWJV'jT)2aRI sX> \n^h3J?T X?̵װ?]?}KL/.?~NK/HPxkmAM˺&MSŞnXX::\fˬfnzOُw_uLzѻl-ōGh J3?Ȇm~[tWQPT1Dr`!|J)K댛Y7c'&v_D @O0y xjhRpF4)J5y ,97Bj\S<`s-:)XLjAm5rn+6,i3pR3Ey`"ɭfVqng2=d2#&Z 9L9=D`B]ȁJXB ]HF"^ { Ju|bWZ)]%*n+dW + k~2 % Jrt*Q)iî^ kN=>C!rxaYc̮G.ׇeWϣVH|Y<1 vvcN'Į0gaWCQ{(t<*l+dW ) + X\w??|5jq;.lx>a+) pB b9Hf=Fɾ ևܷ䉱}ztJt_{z훟J95V{32mGϭo  E)*%LR`PYMRVdDVak$:d4R(C &f: 6 i7z^x.ܦv83k_,L[4?)^p/fxߍϡtr^{1f %w)R.'տ}\m}0Ī u7d_Nlڒ/=JeEg:pӪx^S(Aʘxnr=έp:IRJpt; %܈WJ"R3ip4L e LI 8H015D; / eQf@m䬟l>o9Ix$9D~WCv9m-[%o&wN5Vgγ *}+qGPXzAP,0y1PU\^JTV}*}̀ڍ5pۯF/xT  䞳,v[B[P%"R윭Ŝq*idilRcg@& ((eP8m&P є%ܩk֙8ws6ǵ kb1ZMR@SqS!etZ}O>}n5_xe7 aK,%&'hkLȤ,9/zJw *ۉ+@3{6p_?P~2x.' {cOj1;rx{^UyӎůP6 I4iԺM`,HHBzG:`$ߓfzuz΍uhj8/N&ih5P8%6_/y8GK5mg> nf`źNztx4}]%l!uݳ*M_6RolX}z6nJ_NhpVu8bp6u|{/: ZsQ:Ӌ GÃ4W?\P@D"Q i4]yyUDܬ㳳 כ_>6:v.>.tӮQȵްPOv{cMڬtWM~Pm0mG6g9znbE*,ĩ9? 
#j:5~K8ɂ\F3lr*}qOj}q,ػ@Gij/|q# 2H0 u/RHC4TReV%) ⤖QeOZR4%|\EH2hI+>MC(:g8q 9uJZ (/{kx&}uXKOm(9QAa*5lT4PP!2 U4Zn.nꚅ9P+0IpsY%~){jo.Zfz u;sqW>Z=U7n2:.0՘Kkʔ,%4@m ep=S}vM$cFUj) n E:a=nh rܙ.XFb'/_īnYlzǨ  RgV2J3<hJlJ6GLv䅫kӓmHzm)̎(6`ԧPp݁*U,$ohlsJS0Q+ھR˨TlI=H7QE >%D>@]pA16QHC@x TMr`^ v ]`jmJbl`)ZLيhy< -vj &z/XUJ}7Ǘю LbRry:FPs\jt PRw 6ܛJ7S`0XLZMB6gJ̄$HI2 mTEW9lږh=)EG@)$Bht:%xV)tMnX}ިN7 ˬfrK#6/G0KL(ޯIAtP TIƸumŐxJoYns;XpogdEJE1JR[&lh^DVsg *@v P5XZLZF́,Z)≜6pأ3q+sWZw}$) [7/Z=O:DZHUBJNn7}M[;(CU@iYiC Je RXl,9YqPpZAtq'Pc%b3dU'i %E/G6YCt"!I'e?qt2qUwttX#|+FZJLǬ$ʘD~ǒ`aQomyDXڗ=LEh|Ztk|6K[iOYy~Z Q }l܎bPiflMRvs0} yPhsvh}`3<.UN((D FaT aTmUXeJfoiLt ha.VkfD=/2FZ9P낒s71C1ݝpCIS;Mc- )߃bbZ`ݗv 阗Y9E[*㜟|ξV`,4($4GhBS1[{g t?*q+N(xLgF kBpV 䒂Qw`*! $ &WƄ$ zQ`.0 B" 4.,VJ\fAF>ug@- d금YO򬻩߳Zz4uAI[m'qat:}e䶺A6Z}ɎpƇVr`%h4ДVpRT Hbg6oxd>Z7()&=)Q[vW12#)-Ɯ s72UmgB|Eoˤw\+ל,ië~|爭[Cc f4G8Tw.Fc"jcB(09m %&yo9S- C2'ZK(IItY$l;t$LEt!"'Un'\܍v8 &殠v6If&>h+0^ӕՔrhAk -1g C1*=vGV<\H! ,d 1+V"9AQNɷĥ->FxXmzc[D񀈫8io81z0\U$AQ`m% FB`"ݩ5!-g*+"jm8Bb%pz ",iP0CFbuJd#g.Ve#\יKE!msxQR.4hDAPBP<-" 䀋ٸc[*?8=Nh84<79/.HiG+?̟W6Q@y-xΏA~'Oou@BDʤ @ )C钜%BF"^Xb`0鱠[qk^vɭsKV>=a7*Ѥ-WGl]Ա:k.K>_W~}w2pB{S0#1+O1JIDu;MDxDHk^#r-Xc&H6 LGT]FCRg0b,&c3" )pR"! 
wK9 Q oa`% ;MSM^*g%VW"x[6 SG΁G'_d \ iH Ua i$74("' ǓsN9 [`.8\Ձ4& |R?X?}T-:dv]#␔H)mB4I_IBZUCw&9;Bye:6:51# %bBGsK6VWqU.r #b[$ hPEAN7;Hpk}<=MrPv8L e֮/O%x @4*E ɹEpx;F_QҼ^VBX1YBRE\릢3}ԜF78~_<'mbrPÉ^ 0BUU qWzV;4^QO}?9-x6v_ +MgOˬ7d+xſ_/N?d ~܅aL㳓o=wٗ@٧+u4N,MpZ8\zSդq%-LLWįPy>k0{H/P}:/՜TofUbtL0掌jb:gIh9 q9 IF0<!X idpx$HbyoyDHŀt*b&z}w;aҝo]ao]uȗW Ҋ:`}+(j-J];Twye8AM&3^g}YePƯ'M Ϊ)%7+j}*6杼{ ‰mmk-wn#zwh: khGI_2g˯3`ra] xU +dנܟ"*KeE>j0, +|o~#t`R0Q20J)mDX3΃Gr{,PLhL37g"}Y?~맒LcGpVbo*žWW?"\i)5{WZ*U"}D-y*Qr}+ԓcj3v6"WGͨ}hn肫ͨllWWN=V)Gps?p|_*Q+ծUr>ՏWD Q[XW*ɾUV]D傂}(L<urq^:O.̣N\>|>a;'_ߦ?|N1xaYv@s}e- X6 ~{Ͻ8 rO iSMuj8=3O_aCUxզ[_W[/.iϣmF'4q֣Db=!V\{l0&=7w8p& &c3$^r-# dRKJ1(P[+P(ۧ ݦnYب@FDyN[ΨP)a`V(EFO)GFocUG EέR0X`1rbrX+"3=l}6 };uo_1E;v5Ay$G ԃ>7&da5sTHCKK)FDoAC{"ol(A+z_ bbRo":"R $L d`7 Y|&SLQ~a/si5f  \IAF{dwNi;,vA~h0۪]Go4"x|~ZK 'D-DD:1˿Ӯ|A'7,|*RI&KPIit-IP{ssxhznb3vº{E,S3m<-;e挪xccn$D#4Fc/=HC19 Z*\sP HXG3Obz$z;fFcIJf}eD (&[}Bm侖B=ZyXGu's{"KQZNYKH K"F$8 <0k#?jڅt`F3 \heRK2jˈ0Fs4TURbhdFJs2 4B*x DCDz1FH0ܮϤ1+ sVe4ߛ{|qV5gUnLgJ~d^o->[r%4nfe4sVoҠxc5q;/-%E(&C+wI>r"WKդdg%NA DŽQB9a'rrG'v4{s2<ݜ#n(@Jl$-9OO7uIrׁls&iyάxsl?N6鯑BggG5B- \S: eTs:[0~B=q#C3W_iҺjk{/^-f+& lf;_{a Αケ7/^!6{?en6 ʑ\=aa$%kY$1ר;w1k~{3|egaq1rTF6:]Q׭7pq#Ms$ͣӫQb㔞Ъw;mST!Mk;گ+\ 9~W?:ן?|eë?痨qGw4 wW K MFlEXUO6|q׌}Ĝ0307W_Ow7<mS{5wu/Ta+U[wVOhfY f/C#B {/ F39>hytMJ#`b#>n=FS,0 znu"hYg7ȒgG F#_Fv6i2.}mlkfoJUƸX.h"Dh9y&Ua"F8 ;"w:UMUOq(Yo^o9jz QK+̪cD "|NIݟG}>vIDTV} R:RUI8P/˨b(^/5I Ek.埐IJ,O">F>@v@&^%1hdL " ւ6""b`(o*O(O,p )*l#N!CҌ:"b@%FHħQF.HAfJ'\YC+T^uz?y_jV$TAN5QWh *G/bNOrj^ |hC$bMW9囪s^`ш%DsyiױnnUƌj-mj bHSs!>Cg=J1(pamr⴪z W_'tmC3t$mBGS)$GXeVe Np44_ ;nm:[pmOF}߄7Gs/ש0?lO^Me7`_|C3Yf4YlwI%]E֎]/0-|$7NmGm l=l: :ǧ֧-5n֟[mvF[:qw[زPn[=o|bhZna鷛}5B(wsz[:ԣ]z%z?9qVaFOz=_yȧb{>?`mm"ms=MP=mk{OBAb??i-dQAJ4cIβ"G_=(?d_Ώܳ@~@A⨁eXȴ$R`4ER&#Oȅ԰&e`PxP$ƅh S2P ᑚ@'D/tvC:RZRc'G鞢vhee%ewJ~JåuLI\ 8IHH7'$0ke4J L ]|ifV$ib=f^̦YaR=(~tfE_b[Ppl.ҸRu>i"?t )%BAHGH=TNYZ[`ܗǫuo(P Z1q'biĎ >(0q,)H cSDxpi"M.)WeXՒ䉓S]@ mA{0aAp?)GzxB*m~,D)[ sS$Np,cʧùk.DJ)bv )xk#mhRKP%%i18!W2UΤ1+6\@\FRsc*?& Uy<{"(hG=Dspo˿y>$wAN?2+C*du 
"0˔u3Qr)xE~lhj3{IGD.mA+xHY( ND 3 0KJFT  IQpU]M<ME xE"Pq1L t,G-% =mvёyDpn.QN=:+F"{2eo^fig`CT/E%X4hδ "s9u*]%cfAw}-:AslؤnUӺ;wO6e\) wqY> 6C9N|O'K*e{hGz6ral!VTV<:'2Q-8m?u>]ſ!U-yޏ: dB(g`/&dkR(c(sZdd Qs宙/?EuDo>Fx0&,9pƺ}Vqr0>) #-#@:1NGSoiH^HGup ̠9ewTRL:}8׷T=7=T1rF6I cG}5r 7co$4ﻉ <ω89 c/".HfWHY@ݯœH2cOaO7xxSK1zqRn|!N~vST/bb'$N.cNGw{VE܅ߣǾNSBC"5zHu.ifSL]cI""5NO @%`c NRO2z%Ɣ0x4_j)uYwyr-I)eي܃/zg~ Izi(U^&.~uO*7Ir[<_a]L.71i@V+NsR^G36FWQմ YĦ.Up~ 3vVy/j1GRk{=8b֫%1u1w_]0m00r{{E6-E[Fr$ys)Zђ,Qv;;fdՏUbj[N_]%Uk1E)&wŞfbI8ehj=]'mͺ_r.{C;7"\(]>Zz_"Hڍ06r>J3R H4I]ev%!:*C>: A1H铝 RW RWR>hJH˝ީ"3ʼDL+%aEY([T8%3xC-@rR!DbXH,ȓ4Ok|L|f2[ӝ= |xC?m0\œC5S4/qW&(J4ZVQ_2:Ř'] {&c]MJi*˥T J#c1r#c9]\^R/XH{,<*(cf͸%aݞ{mmOOAF]~mzǓ&FVs RCH9JZ#RMNãltnкiƞpg6A D #mGMLXVmJ')Evq\31Ÿc_fQk8 L[e$b.UD3NGQ`(PTs0DŽNSrtFaD,FvD\j8꒯싋0.{\4PȮ<ANKeQ`|D](PYQKp"yr3cbWw싇0{B8k)J46×zܩYO9RFȤ=Ϩ:t_~<-rfALsfb Q~~C'aӇ_ۄS3u'|LFG'ep;ȵ|lQ.!;-14'&hr{;nSO52j>5{c # 8H&QQD'5pI )y'7xd:=BOu:݋;FҚ{O'J_'77uTb\O5g|/v)NS*r:&k] +u+G--z&a6q/rvvmj[Cԍ\_/di29ryCzYXЖE]=+7E # YD1XـM;uc/)ۖ3'2fGf.45'q؆AduyJ$y> 9(H5ؗ !>% ?DZ˨(-DL[(M9wVQ*FCGɝ7u5Ii l|)22~~zLt8?X]dr'R u&N%afeqW%6t%V|Iw6L չ#И롸(jLε{fٖ>9H p3~miixd۩ˋS-Դ[nj:6q7mz{l&fq:#mÇ&{gNYwʚA(=4zh[iؖo~lqV!S$P1DC \2G"w7{Z8g o/2X#wޞ¬Y h+tywqi8FH"Q$D;H(DBEH&.irpR la|ҰH8\h\5!ws.AE`jԈhI,9;}tHR}ɨx40(*2c…A@>(i+h"[$MB4EtɱF8$(7LgGIaFsAtIFruʽfjmݨK2&yވPS"f?RF3x q8||0B>5א㟡t>6m+;?78z/+yA-{JpT=It0s'V:Tx6 IfcL9\8tcab ;y{,44X$JD5ތ3qIKWB~˟krRN(d+8]Ӻ+ڰ_J=ϟ>pD/\q}%Uu1UեE_l@F-)Ub*I 0'.NK` |6r"&o|e"!mu!+4==mž2+"U"[y-r̅A`g_M4sB`Ker}*3v=}c{M 8t?Ƌ5y*5jrS m+UJD|@c}'"8TW(>:(wR ^K$Ѥ#eHb!*ZX ZHE9z1"|4X =ho[1 B2*Z驎79g1rv%:E87Sa nB7dE^&C^4}Ibq[.{znu~ď];\3-kFhF2KϮ$#ڧ:L9-P] sgOmS7Vq`C(a0= `X|X^ ;M _iZk~n[x5WҴ$7knҚ>߷ᠡ׺ޔN^*3Sе$eNj QW (po2"h%4F{]ހjߓ^&X[/v^MpO)EP+DQ9LE5hze8:|t^ߓV TԀDkb {PDXn"A9!JH<9R@4`5`UqQ8!s1K:e%#O]K qE6oVb.PgTO,u%W_G_5ZeN(PBdL|kb/j0̴|Ƹ-'܎WIy¢CWU?Ch|4>1!wYUfVJ+*᭮l7aiEVIAC}I-Gg %3Zjv!Yхn>_p ̯#6Yw_NQ{3jG'79I?!?7y[JVݽN_[*۽<>U ޏ薅yxBx\蛯3w3S3>b!ޡw9=>Ocȗ@ڦ^_x⟯&+zcKC%9g%F\MBfYɍS**]C"-|Wg-$!>6~Vߦ?4uq9{_>夶'*c kBٛ8[}pYP2$ZT%(;5hsL9Sc.T36+V|ʹRVMT; 
Љ6@, ֤h!r]JtPm[Յ/Sws2C ?f݂,Mo;}&cY뢨a>A/.U!{_Z8g(9eMPq<{ C;Os8ILDX?RzJ%I9x> -@cEOć:˹UK/idi9I޼.u"W)͕eMݪ Cd=т9/;BJY?GϬ8 B{U,uP%{&QkŻ&?+gY0/@IL1$䋝u]] |Dk%S'p@1 \7V\?~_~]JSRlD*JqƨHSsVk L]JOGS?㧋}:O1``MPZs'2  hD j*k 4$.St$u;yk+ h1@ $  ТG %~ہ*#;jq$@rDU.ij\fQZi9i9L G]*dͨhk),MnHXoZ "av|hBьioOpw=M͑c0=1)ls?.C^p=ۥp=pUw&5 \zIK+|27F 2Xc$ǘu|j*#=,DYA7K?1pK0L˶4М޹X F[Q̧9f1J5xp* )&ab9RRM(8 7YԒrjw1;Gr IF@cfLX)SJ@ L8>s]N48r]1lci2Y;v:OSи[z{|/p{hLQM k7)[, BD'v:ޝ ޴[4ڭ E4J(|İ) 8>vAĮQG0  ћvF4T!!_6)r]EKgU4 :Ð.ۯآ3āۄ]&/9߾pE bL%pc"\&&%/4~XG45hLCʖ;[l)^(_]rIhPy!`|@h/6W")%;.d#w9"*'Rk}` G~@LJϧ|$3.UH]\r۝ZhA;꠩OlSsuzL%D kUk%bq ݇lNi Y۩$_U/Nc m["?(ԳCՉrWSM:/'|.2P@ mUOm.ʣcXrx(~3(I8D.TL)} 1I{ avzHZcг#DbC-9|Vǀ ހ5Ak Fo"L 5nICwlEͺ3GJĘ}9NS{`FȋgJlu<{l'$njf^?Ih(h~]U9^Pu냍%iq]oPVHhOFw"c6].K\VgC ҜfK=^?A "獌d]*~漣kԗ`R\0V\pCLzuU+.>IyHE&}8QtJrQ)E$\ԕ?#o=PJ * 4*"sA)ſ#p}?eq`-vb2kc4!ꃛqꃛ[%d$[٥C6MT(9^KrFNڻByB - " RP Z"DRWUKidKK)IW%B3k^qzېd `C0@ ;) 2UNNbO}q]-j,F׍,3usLtPtFǪ[ѢfWQ!PZ0>Bdyjݝ7O]4?|tLE1reh;k֊|WӮÃu4wܥ縛3۽)L(Iiv%ziI/qG$#lnh? wjŖ[m-BPʉ4BqaeHUFT[ KT2UɣƠ*=fMAQ-Z]ҵ27S{X1Cjm33ӣxਧ#@ܹ=iK{ش<|jv4=!.@2v* bnhTS J1y<=:?U!Z+9v'.<xSƕI3A#VDOkoI5~Kd +wFVge"[Ǖ1Aɲy#U*j3YFy]UUo(Гed~~(6JWF#JQn/j _9$+{YB_V?٫1p>Su Wj}*_?~Wnj*< ǐƇSխT]#J]'PF'Fj[ayH #~QҞHj55dPPQ}Q`&lU*2 0`r~HהI8 $,U)AflPx:pЩ+踶̆^뙍T,qb7ഃiM֗zZHlNԺg&k2y|GMTת= IKRIpcLnӑD:$}sz}gĠ1`d8bF1٣$./7,ۻDЇq-`)/ h57Eɣ(fy+%9 ,%a NœZslk $ňN%W0klQQUCxʔGX%ki r)Blu8LRrFqc_coLyxyu+ӓ<1Ə-5tLxuJ?{$5_էtk1~$C6;?)#gndѾas^;jTN+f/~;=IBlBH%tu#8`qNϖ#[P5p|w|_@$I'0$Z^6VCCTOK/N,Xs^bE5@!e;2 jɐy '`ŵCӦl vQkE&ԊLϾby;9e.Jcd\(&XLQS Jo3  ]pUX,۪iDL;w̖%ʛo(Uʿ?*N?׃ v7#QWÖhWa׎ySZsqyw՗3.ٟօW8̢aϸ-htS7 T䅣qwQS /1ʅ{^" 1X).j|orQO?-q!ruD#FȜ8t+]@'t*-X k]Dw+PNjz< z b_q߾^Srz})^>{s]DjX^4ʺPPww;*FvIۣ)0=liH҉$7Hn / ېoqn6yh%RVP[ռǏ!`pX[I \ 45g걏Q~c盻UqkW8^YKOQqC7rfU*O;gt ˛8l3\1;tf3(sQ0.[vg 1D1@jڤa;֜!6EԆ3FՁw[KnWalN9kT7Y5g 7f7h#mr[v3's*KyL?M1r jLZ.?0䜷 tm];'vlK;DvlKzTǶOk{8c[:dl?o=f+3Cze?MOP>h9(n6,a>G}`?+)'I78)3Wn5!z`?ij7l|!sӿt?xoz}?HK3WޛNk&譐 ft= { !fvye2!b'm~ztL\Q}ZǙo 
k*.>b}A6̈́FZ0ncNLJl0S]3ߞk}\B:Ky#_3zQqS$f喤rkڬ<:[r#&Kvʭ99Y)&XdfeV)5TϬ,8ERfVn^Y١I2d1sII:CV^-J7_y/F2/'gcZ(5Ve;(@Iu4F@plʁqk(#a+ OJa"Dp[ ȋҒ`; XZ煴;Kb@Y6%0C'Zy#d)2HڏR\%JՆ2Inw>*Xua*ƑQvvzqyk;T)^_},|]aSU%fkMF0QG8eůCd0Rk*qA,pf1SC"eϬCA/@*QA}N)'J88b,Qal)U0(GoJ ES \iQJ1 }3+څ@$Bq h̸+фȈ dQQɭ& rbji?6%MT+_*zWW~CKO0[;UDBy׵֟xW}u5%T;W!U5{=j ׯBo_oXYsiVxR߹rjMs(h3_~u/7#Z?{Wܶ8MgLS;/{HrDIE GQic"[,v]oڏ\rH3oL6gVh|۹~/bgt8X,z6}y{LΰA~LmbmWkLyO7ur5lmh}Uw3c3?rl~<ƆôzM٭I&o+tyT*@(RnYS7q̼e#F\D[˔{!A(^Voǔx8tQ{eXm21l倓Q(AA{P45RVlx&g( XV׻]4v} t{p#-={[?}=2ߥO'_ eP^z%IcY7%QYFRirr!\k:O\;x:5x&^5R hʴL-0BKI 1)m #RqwY)flFy_L`Q#`LRJ4c("R @pA^n1(QRjǀ@ـxŽ k/[Ȓc&b߽}|;VymﶦJj|8ɷvL?3'!0R?2ȵ9=Ƹj6{{z̽PI) P?}ҼC 5JS _$lCA\H+PAp ɡI(CrOQqn*+Prh7pj C^/5M% jY_sbkM}mgxq׾(E3Iv0lV'=9Q*Af8\ UEC(@@c]ӽRaH؆뜟?DaxvB]Rk3Aժl_0١VZ`{2A'yZQPjBB9SQ!SD)jP $WÀ CLҷ[P"qY=)RTZ>/h }iIEq֟~<KFONU "O9R>}qQ(Y;Q (xvdyQX7fqR]rݫ ; BKH@xvbH'`u}YСq#`Źm흏:#ơݯs1.jsO8JiO-jrzqHa:Oqb=ΩqAn'SVJKI//FjΜ\`=$ۼrJulfs__1UOER,t Df K3MDj̷cVJAR%245 E ^6+'qE qYQNXf3.8TKP@Kwi|X7b^ p+24nyrCTO˭o+kYީh=E \RsoWBI(л/3 ,'WCYta P4'p >|=tt6_هh$)g/6/ӓ'9tG՗WD]8$x}G♌3H8k/NJpU34hMZŐsշ&D8JgF 1eTp 4n'=27piS2 k2 ..٪Z&IG qũ_9 4"u=Z>24LI$J*L2bi:+2}la3n6R: 2Xqr, e& ԤR#RXB9NAk(9'KIvw"aYRH.G?^qr9S_Y y4ɠ q5ĤV!i:뗻JhD.JpPíȬFqBC5sl+e~z L(5_vړ!WRYyMd3ΗJg=3b*w%>-r j:(+xw";zpWKvcfİ3nOZLwYhIw ༏5_lO14c{WfbTB/xZ,(8Zo76_|poIGndHZhFhK;;x %Wb#QJŵA0+ $tKe5iS2?|d b O1?n"sxGhAjK͋?]}/wYN'UO#o +)؂w{$ٱ_4U-o/4!߻~Od0}E:bG\S&0̄F:dSH3~x!75#-" &q:Tđp ߀Mg'\[{nk"OVvȠP_ܸ'V;n\i7pӵyL33`& z)K(;k*)9GQwKrɫaJxK^X@H411ײk1vߝ`V`y fҔ TAk Ph$d\oa0Sf*C,y6Y^!f|-s. Ѱ k94cA98XFk1seALj19l3T9FVqi# JY #Jݚ eif0H GIۚ,%ro^qZ3XT_1p:QײΚg1W3^1\?-st:MV-BzOҾ3ŭHhL1P6nxYc@<ꋎM! ;1uz3nʃ:(MvCF-=e E4@ W5 %[oY0G&y2O'߽Ϲ8Ӧ!|tz9z.BuE%ayt<j:9do)9Xn9Qz_+&!g=ՠU`'0P YO*t-?dRS~ErbMzNshmúGMQ]喆ݫ r <(5xF,1p@<=/~1z9Ew" N3T?%ީ 1pN.`IƁH0*c4d;^ fk@VA; #-}³#W6%IN/Qh0:A9NƎ3-8tC/|Jya%})G[Ϋ+*f}Uk1lxI,sMjU*d ksrޝ,zDjmƯicqRoo6NNraA^cJ~up0s=u9\fαht<9R0`G,φuql㳷fp[|Pq~$x*Z@%i1g'8|0YlP&aVC"zx5vv~%[ܵ$qWncuvVq! 
۬>_<6J|RD9Rrg#!j_\v]0( }wckSQ3YFfZr 04NK[,HL)C1VQ|z_5_}.|W/x2wRa[3:G0dGm97Rdd qp6aHǛd*cZώ9apHy.Ҕ k a7,8i{VDK`* yZI諒g{`2N*,}+w4Dev!v i>&},S1w@&AWIwy|EwM bZ/LK7v8Bޞ׺ 6)3ZI'׊pE3YO%YZՓkAhB)[N~R;%E"%EZʁDbajENQ1pU\o*L/nobv^SO1oPݕw9,6}APuVw1dÚ,'_sZ.qo/fMކxy՘[ NdkF&}wwGӇcW>>LWA95pe!-&Nms)nƗ$z˕f%11ݻ;cb*2ddR;阌DB(UsBh'Yb`S=sp05O.5ֵ7󐊏wyLgJ)1|ٽ1Z| T63><6dfs6]fɇb YYg ؈l7,yF5uy"\Yܟ,B1j((CE r):Cc>Qݎ1 ;$)e+w|Cuޮʂ|QO-F}cȪ\H=$+-VrhL˟աɚ_%| ;z`Wf~vΐ^Nde?s:9l 7Z6q: F: u(y|)|˘ ϞiϞ^\.!=SIަ8^Zq[u ;isWa-)-)'uJX]ݹO)NcۖxPݝAfV1s+}}HFNG% i"QNTw=4,Z;׷ۧчD91:o~c6_O 1ض$-ι-b\ AvzZ20G>"Fd[e?zl=wsh2` qinYail qt *seFIk2AizY̑Qg"X|R?|hV{ R(5ĽZwaj\ "{h.%|@O0L]aQ}?<, |WQ0,Ug34*v -.3{=_,;MP,9 ejΪejp,R$L ȉ)gJ%^K5q %+0Gj^x>?b2gNuh`#_嘣;"$O`` IM'CC~qnźJ ۇ[ksX'(om@u^o(ll{16KF$v mQm2*]0e9M%H ̔vHw^j]p.׊DvV*.нHHrκi0kY=Nѕ5+[+7+%55yvpe !?@XW3zZ>J4a/ g\{@VqgZ1蒚mb-_%2R\bbݾŐI,dgas󂀚 U{QQh fEMNA-#cr4 ޢM3pzqQW!)GӍ:ug yY10ܒ -[1'x񅖑D d\|N@J mV`%K+sc[(-_xl~P G!êf+|w2>`{8H-*x=v Q0qi_"> M 6'MVuAvVWo{[oX% ̗JaZցST8FF35)\kRc/q8[r?R\tٰB**2SoOZWjp}ܺ\ʈ6W^xϸ7,Rf 6MxƗ{ "$˽U Xɜ0bi5̽p<9AF JZ#Y5[Gʃ:FvO45G$LВlh7Nwȃ()qZ},L{F{ C4 Snr DDA'qZڭ]\tR n C4 SU  +Q&6#-ֆv X8-H'BK>0΍54v_Jd4~r[b iL؃^i! e>4X#tc\a .+mzy}z~ajYtwV:)ݒ nM Ch$PobGYp%9-pTyϥE,-IHy)g% /l3QgG3ʟfɁb^'tNGȦ QV&:0K?uΖsޯP p]i}(^uQl'$Ti`M|JN_|?\(X7L ADddw |\WN-,7Ssa ct ׇ n[)`<<ϑR# rj#\Lp8bG'{%k~x"HXփ4,Jٝ#议*A:҃9 כnC“pЃgrrz\XLΧeވ"ʻ}iPS(I"JCDR㊞m#M.bɥ:Gܟ=v(M.Rr8 "aQRq 0tNFRY+ba:s" ϒI"{I6Zݨ;4jS*4 fׯVID1 [DE"irpDIcֻըHI%WoՌi39=Js [)0SU*-RJIT9jzJJ/aDK&wh]1hEѪmT'W+ $H\s_\xàB>zn;I]]Ƽ>AA73ܕmxe,Ve L!sw=rO){WƑd v1QXC=Y@)"L0UOV$`5aeV˪L2xO85bb[Gγ`ei6?OΫ^HY&IWӴ$=2a2'Bc |n␝ʼ,\d!n$ LŮ1+֚H` &iͤ3N.[~3|q`-׫BoYAt͓Wiɫ!n=Wi! 1t@ֶB5f+%iD IIZy+8Yr$3`xpQ!a4*f ! pխ005Rﵣ4EcFx?LSt0_~g۔.h"gD !R*کL`0kR _}xð zTn|3,sl4?e`']Y0U뛳+(C4@ u@Ah%X#FA[E-c j3E HҧWc6Z$*B(.CS|/C.& nHK!OI> .=jzSD=֩O[(#H ϜEC|+2#b'̹UlBUyj~HQ#UP GQqx.~{\Tю>N[P ^R.Ih.W h$I8}Jxa-_R}@v7* `; Azu߫ ?i)=vN"c\Ѓllنz8BXj C7x#l+"] ik] $zٸ ݩ[ G|w)[}$}<qOLvǨNuN_lhOGTSi}eCi}\}rivV%)G w2jnR 8z WTk`SKBXP"" aB w !;e)B9}ȨsoЁ1h2 ia!B :F4Bpp&Y10? 
c<sA:sA博a#3 @mL@2Pc@Obv!r"SZfGyo2;*cxiXF ǣeqC[l"8[ Id$`\bn9CȂELTVQH:o棞j Gn?6dXJ Kۓ)ѵ,<Α.ِmhCD\Y+%9}Tɂž; Zys|ܵ'e<,ql#llҊ`YRG^AΆ.hS6rUOF#kYس̚G9z4\>}݇7g#4zpM9"5HkaJM>O1aq#;T/L/ތgλ|2z) vøY}M}Lj61.z;OUҩ˻~Ug&.0"^`抸Qp/v]7yʩ34,NR$CAz nJ6ٓZf 18 ?JJfo6Jy5Iҽ%6aAg[lX,`(q-C[$@IݫELV+ɓ]% d?[6OU<fڽ/c|vJ?3k*Mg] :}I'de0`HCUhkI^C y Lo\*Ka}!7 BԞ[`~0>Тfo7m@c6./"'(C,='C<\5j| vUwHe S_wI {aӵη,WR\|mW;~kM-<[|,]]@9O>><JRRK="w9o]O8g5M_L?;G@]ǘ~铜+Zzf-#ɻpQӜ /UZ)Gb\(-C:D7:Nq X͵&bQL9VeDx-I̟Ae ^{}3+? PGǠhVp|[]z.d藻+>3BK o%-/%z'«u!PwB(V]i.at8KkG6c* @UK[tdaanw. Z*Qo)lu| ="-$t?Q/;9^k3.|/_4ߌooL9'g:_8>Q>q"ÕP6 n}hHw?AfM'0J3@k~1$~vS \7vv&W34D*%lA v Ta$2R8%02gb ?_O#r E|;?Ass=DD%M~d Vkjie:)"wksuHf m?? >hź郴jZ~ 4-0;@~zM]-PVBE*T[-:T @9iV,/#HdCk &GG5^`&\(?e(;7o!ҳgBPT2QCy,jG +Vly"4Q\J+δ38QGhZ)wo"+DIC2k-L#,$k9[ s\ P)@G)qD=r&v &`hčgKXLHvaLW]hls TӠ}> XhKX7s$P9a-o 7spbVjR=_KnG`Q)SBSl$R3E:JPկs;ʪ~f.V[uB#!]^YzO.E'Z D"^<齄k=zҪ9Ht&| aEɬq>^d)erTȖ Y.t~ X>DB(6Gϵ%Vʨ4 w4 p1F'RN"Z$)$\&Z.Ĭ%!)ݞ ?) _RRrqXR/ 7oS䣿|MҾGE_UQn؇nG:o/0 )J/$Zv#ERt`"0*URnڹ&JkHT܊TfٛG;5af %1ygH;[xyh)+l>;>ZT{#ot`|Ջ3lM]M}TEs&׫hbPfzIo},ڒ>K f,B ER!(c9*(o+VgmnH"R~Wlv]ɹ$ ,IH9z@H$+"SDkz{{Ơųǣ!#Ģzړ.8ѽPJgP š7h]mF Vu^V1. d1Y]IHwDiFTPQLAc4T, (!ibT`40G Y< t4f7<cQ+i+cOi=)Q_"0qX?BpKĎ^u\/F^s疴}$Rmb RIXz⩡!I84D+JCk!(đP3оP'GP!RSr0NT@ '!.PM6P#F͕G 驗R\JF3ˀ,2JuTJh']CkA5+@`8 מ& I,E:4Մa#N@{Nx!*th>t(%%Fzb}2'AeO6XO^=|MAgnX:M: >dɫ7aӫumKrTf%Tʎ:| f(&X!'b]>d0"'Έ~HOzXز6/\DcxC-Qݺ`:Fv<1TiΈ/\D;ɔO@1,([G$:hݎgO n]Y !_vFXc@h ڭ+ vcnNjG6QCkΈ/\D;p1·M5Ώ:`'8F+vui㬨[WDwn=H.dJ _"<~uWpM]f/ǟ7?N_]^+6o/fܝ~wCS}(N;.Y|(WME`}~ ˥ Zq=d}:0%dE&4 "LLR )mHW$"򚗡m9> wy0 ߕ}ӂ!ab̠3ըބz| tACCB`0Hnd %8 u,>,"l> H,@Ź$'whDq93|43B{;$! t`f@Wh@W\7 m0,Ѥ֒ƶGFSɏn Dhhkc١6 Dh jb8mmX1c.3DaA'v5̂K$!v;h1v&g՛ R' IK,vGc4(d>d}PC' 8˷ܑ!r&4Z>8nt?QmSzIҡ׻6 a޵b(w#Z84C".g8a40:]FTJ-Z'z2*S%2pIīylY7tYd[;~ު+S12t)=vƸV38+W ܃p.CCb,`oȁChZizI.nzq @3GGj,1q0Ab60DX5l|B=S F;NNf$ʀ\)ivŷswg)tJ[̝)yV.٠gV<[!H~1mKi?c;Ff*VFc<i% Ccay(zbm'0f|. 
C4yf؃=@X[)Ξ7lZvI,S2Lʩ ~B' 1)x&m4 :v$\UVVBqB_vMynF(D*XG)59mX@#$e"7.`#\P/gH9QzH>9ю?o94L2I$r%\SmTI%A\5B!Cblj9l;-:q5:L?p"ЦTGZcKb|y\ |_}E}½K-V!NjH KDL R*A24!.F%}|K{'T"fg~@ML(,fc.h7X_[{P3NjkK%5ml#hV3*t~ES/s$ ,,t ?/\:@^|^IlZQp:iHNtjOp%NpC:$g6Ԁh6f!<(>#O'SZWv3#>j0c§xpyP&%Wx ~kz/ǥ_}ȋKp@n'g'emH.(-{y p\GLKm$[0XaH 7%kfX폄f4Nv wYȕ`yzT#zb/zcpVl`TƋݍL]9mnM1K\afF h$AkgiEX%N -sb)zN^\ez-DVϳk;7++;٦0,뵵E\JI/10n7 [g;,v@Js.pJg.tȎijsmc)[U׷ͅi]bMЂ75Ӛ *2[V`SSw <%R#q'.T,7Y3+pUHvPcmR?{Wܸ O!Y=8{n/k@-^"A7 Q8 zі*+Z7R=ze*;F_p']MB);b}Kk|. W$7h7@ߏҼp3xc`GӴ>9 45 8I `r]L#NY+a>d)>7GZV<@Ls.ΐ>[+^KMGa͖cI[)Sb:Bpq}R&nsX۟#{URv) 'X _`!)vcdYgbd/ul~aICٽ&S>7nu<[GPA=k͘m2vn QCGy6d@xk+EٚBT_wQRNXT%>(8tQ27 @+ 펷wf2Ljm%i,KN5u8~l"9gl|t"a]a3G謦l]=O-לZ &x:dAL=/ɂOH~LkWju_FJ0($`|#Ҙ@bZ(d&vζՙ,Qqb:;bx*Jr`^n=o+҄.ѻhim^vENr F&x;ro 1{3db4}+S+u_RpK|(kG=a7QB0*a $H44w2?k&ȅĴ [K0ۤˏWsNQp__ ·c߼-'PmKlЏfibB1ηGR&xueW5=宫Ql#]%'Js][-oOL/GkGqA]DZR.ގ^B ˗&W0 pt7PO`YqlZlt}0gte_Itޣd(YߞLa;xfΔ>CdÅB҃r>{ c m C*漆NLtͲ![Js.ouk W*fS,ǶleOcfǥ%pKͱ],cs8ۺjM kDXa2Z~ V Z@l<4&ugM-\Ŏʞz9;%f\_D\O${XC\3!L/l[DQsZ׿+"tNVƂIhQ'y*)hhIrgSӝ{! IIkmȡ6Q>ʗ gvWD[dVUu޴udkBe]m]BOM-Zy^ckځ\fCAoδmA17EddrLC@p. eԸv0wwmGMMge )'[n!/u+'J$-gyuCPꃗ ~pᤛAݯe'+ν92/b偫/ Ku{qz5o_ kXag i)K53Wa}91gF^qɮ=^pԚ,eJnգЙmoz>I)3^ۚP+<Ćឌn~|:ρ' TrY%ݲZwVB y I ]tEV<46{R$BA~[O(,(C]=Kk*wJ9mbO* JۣXpu$O|$ T9jW=ݦ l4(n=$$Ӵ8wDQlŴ5*QՍS4Ps(ߒj؊`#MrHRy&QQV &'QVk~_LdqIpt@yar %֨|=^pRݍiyXiGUS&_( p .kj :ǜggs8` KxWqv5.t8rZ+2zO`T8fqs2 wSG-&ǕW2#mE'{ 4&0_)Ws[ ?EbU7[#|DOAI2ǔ/T`3?='9_#'2`GjG7y?w"Ut ֓~ޭc)%:[8%FCB9O2bv\u|#7{= DJ|݀JefuFd˻' {f(ˋvkJﴚif~[O)ZBj_q?]SF]mJjQG[[ >K֓y.U+~8ɉg:% %x9ɕ EY1ǔu(dgj4&"ۇ7q(NAV}F$89biovOLX2 k2.#ϩVm  x6_7sU Ugo7H tҝKy:ZsLzދXsfdO^!7Z"#s]^?f@X0L) 8<>p k$)SږU LtOsJAUXQAx99\qUұ=6k eu#zG*0K_泞J@JQZhMcLiy. 
ąB>A!drwd(˰zIzqz\ Մ)`1mAM*ַZ3ALb}WZSs.7* LPpk\n2TUxa̓zDbWgƄ X!3Yg[kҰ}y?-KOO=/my3MP\o1W##?^(9 3j1|=X7!kXFnK/-rcM2-HrȜ[ؤ`JKȬRIwq?|&/\qS򙅭`RA@<H\U)FaA5 P莢7v*PW*CE`"AD*Ȟ=} 0kpcP4ee`fNw L#}'<-AyBՙ4O (U7.tTVMy t"d 9lTͼ hCkK%PʜRS96"\˞le}[S |שʷGdz؟.XWE)|zt(_5 sx_N=zzIw{9[ѭlr_Ņ$^!!ŞI)l#h s sf[g4$EKVK XEqdqe^Q=^KR[,޴0 W ̢jl?ΜIqw x``v8Ʋ U12vZĉ75o=/b@8K=>Vc.JSǽ d-Labp%R{z~*aWmƏn{g5C?âѾ+A|hTl}3 Dw7no%aao>$[X`̙QOf[lOlf /q?GD{~Ás#~"S*Ή8 |L8U^k1՘>Fh3+c1"o4IHN/كOi9wwŏ&@gʪCe?d,]Eɗfy+`쇓Uz8P4z:޿ӡ܏];oh&@b qxүtc; 3x?m}ox{0 xd7ûҟ hO08/XK;r6QxFLdjBa}`g2웻݂3/ڎ-g\,ۍug40X- l~}'}t-nhL10Xu%y'%k4[}z3L9 n9| 06fqjs!c0 MDfD\>w/Vo٠a2z ꌰ#h(qic5j4 pjfswVۇ}qbr(( 5I"qa(3T_#6$|wA%C dGZLfJw8W#L^˳u|Wx8C{\f_UyyE~ Z^^=vʝf0<}sf/W}ejȑ_aeSK/dKmU.{gEfL:׻`H `Drʶ(p<it5Z+@^|T:"?f R[6J_f+WUj%fv3&%W󆏾>N=Pi+~u~5JGݮN7 r_Ew sC{c՗D%b.u:9RDŎ>ޮ%R.FsTŴj/.vT3TAЖBw1#).ߌt!)r8"M[h2h Q$X4LEk'#͠tqLHkqÄ\\;`>p.ރF) %dCxk 7{ B|^Q (Lc@/"ks>ǻW8 _ ӹGڮ3OV1WMX7aބU{l4Zi*t̰Xk1W@jfP:%,$sLBjdRlޗ\^f2*ɀ,ܚeo~iiQ9X|'G&5eԟo _2\զMݎ˧WkIj׷WoުX 3V ƅ0;r"%a0R* c"uWR3Ohp[ÃJB -p;tqR)!pwM]ݒnӒn+ڽF FnkKv=Pj~ kiW]H%oC8h*\Q z*odJr#ypv/'dS5n&.\z ~=׃3^(s&Y|m) v描m%+OT`ꖺ[=ugy Q+{ …HiS bʨpOX˼Ic~r%cpY(,kdᯇ`ZxCm8A.$x9{#%vO^w4@P.%·zVx逸aBvaF' D1=D҈(ƴ76gY!ÆH MaQ41Mkf h>&q)1H`hP$`~se7uRP] VP0^8 'ި f-k@a-慱VPA,D:Y%mxa=_Hn(9CC"@ŴQioBi(idg^0}^ы:C:D@(ܫX[LSƛqP.,"BKڟQ J+bbZܫ^M[bQ*2_]h66Zz")$ hABP0z N).ㄥx>h`'re:dg4 f3~8ߗf͖5OÇ2m=WJ^cYZY4Saҟ~q.c6ϧԹ UkD+Ȇ.RgOn{T}km%\qLdq,hb>]` "#"?,㷓 Ǫ c(/<\e\wi  t/qCPDH~@HMڮ0k!R-? 
/Y?It4K/Y3JïLYKV#/}%diEXeg0%޽|`!O?|>,SF8ogrBJϾ0r]a>5S/Y˴GAP2 4Hd99g~3"'Z{3.*Ku8$Gi{!q9yR) ý2xp:l),dBH[74q/ޮͼo:[#qz1lbŘՋqwnz1lbLŌ^X//]/μvbyz<_dՋI^QVV/&|^LK׋JH zMUH YɼD9͔kk T %^U;Iďo=^LՋ*vzqءz1szzqvzKxzqfK,.~PJdQ3d c޾MIJ'%?$='%ѝu}p[ _qca_/wFF&rMa7w\' ~_vr獸n| #Y"i;)$/!]9]]󋫝yE wHMamrvЁU$P}qw'ȱ),zs'8n ;(RĞ^a SI C>n:;jV!>rnIVρOg_c^ﴔ7̚ 7v- 80Dts@Uғ{';/Z{2 i 1"#&DiH1FeJ)υ{@9ԠݣGM!7pۄQcYR;C* eGɻ߬~iNJ_ʾ|]P_{ye{8{UW->OC ?X~~v)0@Xk0&ޙ!xre5xKd4(6 ֳ{ A[y[=z8Wtgaʒ{ShcqװGFlk{&zxA/N򩽆57 k)=ۆq>l+-ߍn}I^BЏ5ȼ=AݏJk;]YU> 9^Yj b^Z6Wڀw^L3V.dhZ-zv!f]5WܓfѲY͂ Xڵ31qB֜xhҖ|tF b_*[5Ĥ=7x{r|gZe<ΧSJ_>wzMYOURS1~ 4f֦P򰰳e^R¶fz?Foj[&1a[h+Yn9uQ6~Wݮs?@~}SI]Ԥs֢&{I!qh9;ƐljUC@wa5:;KH6]I֝ӹ{ѹFaF9:5zc,c$!GO.,uITsܨKB?j>rŜ5" @#urG-ǮGǏZrIk{2j%OoT8y=ؐa>p(DV瑶WV^Lgt7g`\albF5|z=hqehHBT~raBˏo7u?V֩]Zs= ?n͒n*.W[Bgښ۸_ai}6#~qrlƮ8qʅcI9ޭ2)̐ڡ(\n5*+kGB6)S `ݚb#:Mź1Tc}0HֆuS7hݚb#:Mź1ݵukvnGNkovmO4ֺ!!\DT-Tk֭=~̊DSES~iFW !\DɔPOua$rM1}bݎW/8Շ*4hukCB6)ok׭OwkATPa;ݎVW8Un'wkAB6)?>jS]{koWԾǫcS tnS0f{FMJϠ^6Ȕݛ0j^;Ðj+Sme`FNuxUMP=[5MSVjSo)E' 0 4ՂZv5{c1: V5A]&Es1 1ܿ3% s1 )$SUMޑz3NcnY'3 ),r տ3#o1s@_qR9[NxrL r)ܪ&H3Gs1 ! ),r̼gG J9cnUg֎c cN9V5A[}34R9B߿8c~9fA_vT2SUML/,$)ǜr̭jc%:Ic~9fq1>Js1 %8SUM 1KS9%{xI+; Kݎt0 | ZI'8Hi.ʑDW_,pů2\i9Z}qx1/r9ɦW, dr}t?KxY\;f(c,Lf Bjǵ4{Aku<|M@V2>Ҳ?^cM Ccgbz;]Ot39"m(Aّ֢/P{͆҇cxz2w AjܗnQ<\le,8GVkS&Vv|i^3Su> v2`*E rv9Qp}|.f^˽C&v5耓{᷌`GśJYI鸲,;>s"Wk-'Eۡm~hʔ!u%}TFuk&5;B2,|xYk;gFƻ6k~#3na;f>lk|-~mB[qw+-]{G1ayB>:4g\u"s՝@q;0 =QX?:`͎cC4S@lGQZ +Qd Shxj~PPH=QX;cks;b0Q<S".aw (N]c@q+Ω腹;(ե춤ԥt ncwaxsaPՎ4D{혛 TppjŦsQK;@b4)xIgHb1rH?@]On:)h8Y[| Kk+;B h$^h݅$2 CejH:#LᎨH;; HGvecq}HɶB߶7ہ;I%%]noït6u++OQySQ$h:#ߤ DiGq(E--[lHvGڑ|ܲ+y&'m-盧~:%m[/hLϿM]%aɨm9a` BwYK2ގpVwJ&G]O Ao;'IlNRD*^`vщדfI2xBSF8|; Mh7tiၔ5 *&'ЎOg3(ug4ؕ4i{N=ʟ["3 3+f8OugC"y|`nB?V\o#Ucu7b4?Ms3)ɱf2͓Oni}L'\ƑqO+3x̔(pG0>ppφพ‡ӻ^.Nj|>)?i G. G3։l!AO9a\Emo[ _`BHDsNr'y.0%yU#O9yg(1εn! (24e\YMD&HNjрwY -ES)EsFA,1a5+gTz{454GJ_SO<@E@ؙ `Z. ŨPf4WN~)'LҽtaWXqPR $SbZ[/EXAaE{is̮ɏPF )2d3qRX\r*ؖ2pRy cpk԰tH BF9# ܽŸgҙf3Cvg求. 
?SkI6xaG/f=S\:T閳OcpӢ4B|ʫۃbyc7*Ƿ7͜jhQχ/̫&^xG`\ِ "2Dz|pG|qm4-l?Yo|q35`?/'~঱60_\|`x ڔ!!a'Gs0OxWBodJ $VKc/@ӟYat煺 /n`bq\%_#XRAVvbތ l80b<=D ~VΌ/4QET'.n7(_Chҍl|Z|=k2ΛxWM/̫ 5|oQsKA8 (Әo2嚛0a)k␧:nlz?r L\33<5\A?#**pg֕Q.8pnŜQ)R1+P30pN<FfmԬ+o\ QMo9Vת`y4{Z+Vl}|e*", [!uB=vVk{閇ɘ_^|\wn1>P'PT5.;@=&'vBmya$x{uo%][=D31M EH@ 3Ȓ{Tasl Μ7VbM^È{caFN'ՕW.f\hf')"00%V1\У'{\$qGWkOvpdO +Nڒ** ahCषf\k1MGg^XEBOGE5mWr[)(s_`%Vi FYs8ЛR)Ia"P 0ıjiQ7a۩,+a\Y3 pwbV\?PRZ{:nUXַT=њ-?c7u0vS&۠@*Ow9[|͊Qy# Z71]A3=_R.Kj~ 9AT~m?L=0:Z}o['8S~l. o 8@5>Ϝ]sG5 ׸/8}׻>/Gx`W3^7U|Ն5y>8-FoQiQֿ\>Y|8k$FTA]N?b}pS@hh vFZr=B"eqpViR`QȒtEA85*$g2IL͔tΖ.Ĝ [MPe^Fa" ~y4"ʃ0O.-:p݇|t>v-G›)VcSrhD$eIRX/l|d1GHߧ1:{>Y2eWo%S|8 z"#΅A۾5C' I}=>'1SMԏY LX++bA99Ԣ0ȵ* +22(jfi!5?~bGڼMbD!16BZ0kgaE{SjԴ"gLAJPcJ)][o#7+_gHX$I$lr,{2"en˒-v4 ؖUj~EDU>M.yU`cM@u:G7p,J%ޝ5Pོ=w_]]T?,؇?,2Mh9S;^^l#gkG] Q=jぢ~wy/(B$P+uY>I@ZGyr=|;% ȨUr 7.!VY[> { Hq')XiS|:ROdZv=s@}lVe:3&3Xp5g gdlg]XPqJ& p5NX&L w0g)`+$2P]ڜXD9xoWFHڝ&x9JAx ^)K,@ׁ*t5$\>dTV%I I@{EsmY &.*%&9g,#;:N{ &eH 6 [U:HmQ_/ iM~5K["S[ | 蝄$=ȥ٘*=3\0‡,yup0FWRT T(I"\n/䊢`=FyG#5*km<1Cnu76XxO]_hMF`JcwKoӣp4zjA-Hŷw@ա% Ŭi GtxU٬J/8}'^>_r:^ڿ/u#>l,RMݗ\m%Yx .F3V*!98b""1&ŮkD9 $&%6d|VؠY%BD6ޛG!jJm畻b{$^~d7qT 觵@E ȭQj$f(ʭ>& u\mh(3'4h+'vfa8|go5 ,S_Cj7yo6c7xvoce,ɥ˫{:9,SAzNV*O1Q. 
e_$׍ MG?YMaď u]wdͲ${R?З1ĐCJ 9kƐ$iR1!(\:qY 1p>*' N~.P;结[e|=ڸ)}b4Z]yrKZwyZls,Mv۲SJrIT0̱0$1\._ƲF -kh mѲ?+\7 NX^[v`K>\DfK`(]h()!FҜV g6լjWeW))#ž-*!d ʠ1$X0e!1Jf^WVWcl՘[KB*B/w;E5YVPjWX^\pv"`;8@PQ5zi ~j-U8cî$ fx%[?= O'vU-;%{wzѡ&i~B3S>ۏApsd zH߽I51H* &c{ #=#S_#5,獍}YlG^-v@PG RQop;) 4 iTò2#þS;~oT;vU+]CJ 9+1Đf i08Q(IZHe1dpkH4ٻd#}ؚ]U;Pvhy)}cmIǠ\H.BΧ{is ҡ:A `q+h 7kbgoV`FK3e45=zl͕PH?l*P/"V[nD T!k'wD,>֜x}7sȿ~ⴷ JN2MʗMd ߜn".íBrn%wT&MmWO| boҾFm/'V8Arϭ&[\d5kQh3bĨ+_-Ũ~XmFכBm/xq>'% "&octI=*'3x)cM5ዓi}(2=?YyU6 (1ff9K\VIM{ wjf@V[B 09V hmph%s 'ڑ'Hm'i5x6m97b =n߷b?UTV L !5Bf sDџV.yljeP)ZUp+߆nKP +0;>;Ie} [&7{L<hi`mZ殱MٯUb4y%cq;Wry cW!.>v9Sȓޙ&YH Ɇ1|uL>V< 5*i9D*t[9FyUKuPX2U'DCU(Ekch14J:N .Tw(Sp!*+<5P>)kJ m]_*I yސ#C9puuڟ?Ok) zzzS|sXph-dVkvSS$ 7MষYv3m;n4#!>{)i6ΎU'#F_1ޜBs]m r#Ǡ֏A01Vժܹf -vjΘoNn Fy1t d>')Mֈ9ASL\Rr[C[5&["-R3fjA^`-#JϮP5_nP~J \s 9~JaxWvX/S]xyj`NՇ7] U;)*fLIOA}wE=ÆVO}ztk}DW*PoNݤQt0բ*[GpZ0(z!}_@F!ƫF8o[ލjZʳXG}8wKyno~&<4]'?^xe߭ΗY6px[7+ݒ7)yw1v(n6] |*'$.bʡoȌ>ӏh>?9q|"}qko~|{m{+GĿ8M5ZxRBB0$Bbh okYis_us~+׿tTC岵N5FQqnU}JJ-yꔁ+ӧGM1 Dd`M0с6qal^Fp ØnTp RHoGGVj5mh ?#;G,bVs EkIB# E<*bEPnцKfZUbj5SRڔ^rVp lYѳbgMmr 4Ұ̍ZB"XQq0,rsAٲ\ځXG JNq[emyd!Hǐo *dG\ >DNK :Wև}Fk"dQ,hP|,#uf¡ՕXjD K2>drJOo|OkZGY0.H07ǘ6JrZ$8jrSh]bpt̋ے2B:DQuòz&IshiՕ2( g{:2~V|\l~]J5˾ixw$?G.@罜20mۇri|dgx7db Ri$ӓb^_RA7?PʀRh8SU" \\-KzTk㊖r!nxP?r g f$Pʼp)vwTӚ{VB h>5֯1Zr3rB2|Q+mڼ\c6M4Oe!cImCNin1,ihНk PZ.6ƃ}XkonV_E?z1;i椱'Nt<$j:%˔D9㲓% [,vzAͅ$QKՊx  B+Q*"`Hx `4;f)D0 "R(7,9d0s-"r(wՃ'Yz@"z8M+C)A+YDPIs%o7X2 2r(Ma>sŋ VmBdƔc@M)hɥfEIR# FT)c.HĸDNRm m2ʮs ~2>m>P!`*#Ō؝H$ TPX\N2DSwAT6wm8]VYfUnjnNy F*%幇)9P c0]G1 IB i1FlF+NZ9tC܎p?b )/>C.%.vd0ȕ*}ԭr2l6R12(*Po4Waa~["h@j]͊CW]Ywe-iJe -RrF8ﰂD($דR&ȭQ 0mڈ\%& b VD89i\ ~';sf)A]LƉ&$vʳȿ<1#*p耐7Vgj8S ZsfڟdFRJE)M3z7Ȼ%Yl_&0yzQ``XxUG3pwtvFI&1^cQ<\7R9kD$CUm祫H7>u_e9L;I? xڟm7 -`j Opl!K|+Z6 Ƨ !OJ$@3PKj.? 
X4[kf=  {DpSswV|H}NɮZt3O5 ǟGiքSl>Vk謤~ ?.)VGlTOR"(-CiOvp̓GV; ߹^_}[OMͽhu ;6dFkOCpч2g\ii` _+LL=jĕ ƞ|\<'a\QXkV(H]J &0XQ(jj1P5O:MSUDf]cBt#,6T+AcjqT)mB/F8n(.+lw'mhA4*H\id$Jpt@u6 Pcu+O U}z_{ny촡 ƨ]-Ӱ5҆jPPAȶqbCYR$O^m $LȪ:-`+vw.`ĕvWyd9;\.Y^=@Ub b4u~8>҇92a)P~vq{3רB7˯^p=P_$I䅓 'i\TNcT̠B.ûR 6 G>UZF+2Y$ )(o;]o^d:?옻$ҝ :ɍNvl͒6;+jsv釥Y *c{5DyF*O\ ;1"*ej;C[ Wz//Wa<F W~KB~ VWy^+ RQ}(XjTw&}\* JǨ}@-4,-5>(6qEP`yP_bk4V,ȼfKSl6HSҫ38Ež7a[ E5ty^;BUYvPxD7$(p}ZIldҥ1]Us9I^ y~7/B14<IHqy{S>N=EQY֨EvɋH ? Cih1IBOI<* P@"n}oKvfrNƌ*քOyHF_ȟJaI瓙˿קachZb! x&_Pz[ԧ觴hri{W.r%EJ I7Cjސ"`̔5-$ E=者0%LGf'ss~h%Iq5nFl/nk~ _8")A|Y u}(D&A#EFi`+{kنJm/|<8Iˇb%dHe]Ν(׿Ӊw] m9#c`66݌'7 vG^e;WU~_:;\KBxo[~8Ms;$?.y &^ .hh[;WFrݳy!a҃rRC;]vڪ7'5O`$pE'O__ՋeJp^=cEuF7t 9I~z7^qXt5Sq<ɑrwݣ>O,u؟魙\:Gvn߯>[3á7$w4G;?Gv7֋:5|V4wP_H}MN>_hpӛMaR*;u]5Oč'_&0'lnmyS ۄof}=YވГl{庲ZU#}8] U -k>D!15Cùvf}bTf pV#$f'N*HJJ‚t빏 RzD2E?s9xw.y>@񯇯SSWu? l]/P̿sB7yƃ齊ތTkP2? ߽nЅ>׽ߏ^r6a9O^G_N'x޽Q+JGBx< ;VG/!ŏG󲽓̉9 'k:$+MR}y zn s#3yt4MH?zx>p|>->8aI}Zx_#㺸cBkܟή'iYՐ :&>{sQ.b=`G"IH;On$a:UDJO)u~}ħ?g1s6{'~ǁ,x;pFPA[ ֔+ow)2F9w""s;>fɸ%R] ˲(;-G|D- GrNWL0#aAiWP"EIJ $tZX@! I-x%}&6L'{Mr[XlDXX NO߫7uR 0&Q(n\sauF KNMuaܰdN8.u 7`69im{dEe *U \E#y9_2!#LbLq1N%g]zT΁4I46i|_ԥ-ኢaMU!1"c]fXҝG0x,Z|YN[.} M${˩?Xr~ts sK}/}}J@OPN+. 
oX` _`@H)#_E_I!Qkc[01@ |2T2r2Pu@~<]!*J2 I18򁕈1VJ072$8 J I]*qO('̪3 ^ڰsi0W`.j߈B*`l&U{)}0AuעyQDB ~n.EnrYi *iP뙖]moH+n/8``Lof`P$ekG&nwR&)3@-Sؙ-r/g'!*yvy2c=HhTy ^1~lxܶՙzs$W~I *WK'a U!zIoZW HS쀘}{PThJ &y*Vl=)8\֐#D=ǿ!U/O KXq= ΅wu(1Z-:I UL$U/,:<:ҩֻ"+c7mj\+`Bl^`7`uY&r2a|NPA{o-oɽ$Z{Lo-⃭h1k3cxOhD |Z_l% s1#:Dit0(A(X#7"Jmk-FڳeKak٭CDz)fk3|)(ŕmS`:ɌoYi<gӯΞiIZύS)lL _~涞 XJםKURT;/CtϯQOG=~N l8Bs5K }ԗ!e B4"z^eQ7``z>VM:hEz.RYWcz˄ B4jq؀B#'X$'v6e_PRi}¨UJm>8/x`r޸;W2߸=R޻7Խw)vz5Og@koR,';E^l眏1$t{ /;%O$gImԻ(FtoevUF9j,IEsf:5C3ue6WaGF|Ľ,m&~G{iN.@ʕ2<%FIיz+Z(xwo_ٿ-~#˥L/AW̘[I@k@#ªbcKT:~9}) {"!"q90cv\Y.3Y{`PКP׭vywn4 J/pa RF)pN)ou~7߿yxp2 F*ƹ(:ʻSB]W5N횻Ɯ`5QHkү-GƬ`+:ԑDA# gٍ9–v8"ؠ*n]yȰBTLV.4QuS:kzb׷עЬ&* ֕an^"mJ-cZ(M5A-V2|?PEAݼ G-f5خHGtsqMыZ̬*IՖ:(Ap|7}2vW/ g/zV}+'yl&p*= * $ck،(;`*fŦTSoU^-aLp,`lD˯',Y(eYch)zA$}@l߿} r36D^x}aZ{Z:9^6ӆN|Q6z/W'QLH$}Ƥ9*:e3Ct [sݨjq33Ο.3$G֑#RLze/ˊ1Y@냲\3('uog@bD<&Cq7t e(S3\pyw|v͢cg{6tD]x^hF+3 )NAD )ZטJ) VF&Zm6UצNE B[`'e7TWeBzj/7f >n94ʜ ͣD3fPjCcx㼶Xy7@о6Чv "bV606rSihQHnDI@nzjs|8*mmęc2еji4 ]o /"0-I MbdI}Yc$,$ 3w?$G<+`fX X_a?D(a2P?|B1>>\L'R<'gX` @?D7aHPIx!n]Z)J_^j-g5?Y)rjpHhҶK_rWrKհr~\LYՌl͗<߲U=-__K0F@Ӕo^wݨE[GiaM4ӃC.`B߯u7kr )mU#7~afa^kAyX7_գo6 %IǗFtA7fb2v&%ꝷ>CѷԐh*M*R^$5$X EfIGVE`iznW8 nUNVRM0"l|lmPMuߗAw_oa86 R{g@V>Eu9y# #zSLSwwQݍCp.bQ 0>Z/[k>5OV z9`UN÷ssƪYmzU9 Cr& ٪($3W[^5p&yh՘WMcmw1Q%`}=HE_=vuwèf,#@j쉾 yg(&Ow$p}AiPs\| _&%齖D^m r?&Z̝l⤜N: Re6Remy܅~'QHAfVZ.M_oiWHzpyԠHi&L)b17ف-y51͈3M]d׫% NLh$=TҶ)2PΓgf)zPD=Tl߿}}G _r@ݢ {Ly0ƽQi]Ll[/iC'N+Fv^I+I߽19xq~B(XT._TP+MZwǝ)++S/=?Qge߇ʑ s'Oj [BgJHQ4u//!]R\>(z6!z#'zXC?Zcz9 ^rcљ :I󪄐1̐Q ^zx@)!h2S!Q:"!( <@9rk~ Q. 
*"BxY[sf 1S9ȸd7D Xʙ?P-57cl% C:|| _VxLA!o^TTوCeg L$V@m8A;#Bag#e2/3^ZK&{T'פ:C,r*Rj4^l?yS#:S:5-T`F7^peRnbgø9h*ecuEdo3LfBw^#f՛H}:Jf@}Ald)=R)J|/C0!*rbQ5=fEDz.R(N%ruG "zVCaNәʢ#'X$r *1`8{](F۫ε ^7ؽ\7.Ε̽n7mυԥ t9uo]ݻ{SbKAtެYyt?Ǻ*d6~n's>®vit'+8xF7e_K'8֨zcyy~ +Wa.UN\MW2h0>)|m ~##HYfQ`TJDyTLX+װ<|nq縔&2(b57"_=U[WsaZR3❖x_Iͭj!<"^0}B'7HA lWUf6Q?{W6俊?y`;& $nÓE*c,X荲V?)wZ|FgRN`!0 @+-䯌WO2Co2: jXR]+B5д8m=T#:+ؐ)IN׏&FRHT@I̦^\EMr>1~Dobn=yM#_ё#Ԕq]Gf籀YY(r![=ܟPߒSMf|)LXCbUbW7jj{W)G_45IutP8P+I,=(Q}+a,~H.F&`m*=vb”svO@;H;0|R&^yz~jF|jLf;YMICC'Y+v^ *&J$ya4Wր;=?/55qVUC@Su;q+"sHxU*>ȞXzmқ{q%g]I턪ٽ4[AG%.>r|u}0q RU'ڃ@갣Ǝ;J9s9SP 9NjZWI1A_OTA1 )Eg\g٧g6~76zcِڌog)K=7fUQ* T&a9}ьH˜&9bў^UcϖS/h]k3|Ͱeihe^Z׃ 'l0ed>@DPDr*ǒdžd& cr7%O$j{>Ӄl"S5RZOqbMJE^X6{y;})4}wMN%|vc2ˋ][V12j[M:d șe:Dr Sck>`+nE{g3%8g)Ge e=Yِj{%-hnFd+W@i0 v[}'W&beURhct+[tZ@~+yŌn8;LPunj%9SgםAFeލ-d1e rs ?5gv[UXs-摘\M#Lx1s<[ &!;p7?eBj5=rQ(|,t/?]a3)x%ʆ#/ZaaC(hbV 4o't41. T=n4n *j8 |wBa0>9l(BRI 8ϺCnòCn٠\s}<\Ѫ719@#W'e'[׎v2CtɕM+ѲϏ]?jf9c> 0ĭPN (Ee(%޵IIwu0{jg6ECUNƆZ̓ KiO[ׅ8M s JYx9iAb|"B>՛UX),̧*eޣrLJKYUhI,i,R=-#{ST[(9$ܯJABfiUZT^X:K8E,d+,5 UkKF풾Qo..t/zDPOs\=Z@aqyw v<(Ele.UE}}G BtpL>k;X[VDt@i"ⅇķat& ,L_=|Xi|LM>_9,BOxSF>WĢl1/Du*:dJV~r*]? 
LҊE-<<\+h ,2%.{i@ņc­aRGq6bGMIҮ=1~K"䥜3[ֱQa^NX??Pr Y>A"p|Uմ0SDHxʐB:V/NJ<=9 GEąւņɰy5ተ#Q6]>{Z; GI?{Pl$|8{Yn.V7ׯtOn< #u&J{I4&#٥n}]wȿ;"0Y|8=G0Z}A2CH=ݸMGߍG݀^@={9{T;X5&x3;{l So7W[R RϚc[rn8fs*$o]4Q<`6::*o>-6\UՙXR& 'ϣ򙝾xTSOq)ZJ6n3n'FI,W;WKR2z,}RyR?!}Cq2wH/jq5r7d۰XB/bv/QV[]<&&֥- %n ]|TWq/gX٩)r2zo!])F?mxLnOc=%-vۭT[||aQP*'^45sb]N4Yv2uԷYh_r ڝ*NV^l*WVDF f&Pզc_n(tMN*,wH]Eц)k:hZ7/JgZ2-i/CS5k@J/{{7U譈RP#!?.SAtG1_-Z;LOv4B0> GԘ̦wV.tS 0T:ze윾Ӎ^W9D`6%pG0s+z+g|T[ G%`9g׷p&fkNvZeRVgkQ^2N=:X%I9 { @SyirtqjFgZbf%B`YqAbRjtCt{I{DB9坵4,sR)^l%߮t:.S[A<$+Ci$[Xۊ~d6yP< bqbM+]©gX@hR;yvsB>ui:}bzncIpO?ȐkkRɃ 1$!iY7 2fa9ln`$q 53u/ÎldI(G.q8 |@[®dGħ(O(`TZLwI'OFK4I<^<P.ao{:=6B]q0ziqO鹮]b& #|dR1﹦/AA4}/E&}?YEj*Fk ''HDf̖[U/xۨ7oX+ %Dw>^P6  r~5|XP((zprN prUpIGr|;Tr,_y@2 [&fiK=CŔsrbL5S;r4L=N9|_ &1w!t]Q0P{  )lWlzj:  z7\iZKG)/oU\zPum7ʟcy)bl2f!(0Ĉ%.f.bE,e"]IJXv.b4b˪.bE,Ep7 LU"F,M`~t;h]k(v 6QgEѼHyw^΋y:/REHٻ6r$avW.lg&c݁f':ȖW33_[Zd~XVwbH:"VHHeJ!u6K!V"8EάD0U$V@N{Se (V!ż:j$8:kp -|yueZkV uFUruns4BUGT3sMP~$bJ(ii™bX2@{s1Q +6sA"!Eً+hm2)ӟ؃%,*TsٲB)<)i q,r&{bV>}Wsꦸ[MGG=TZsͩ_*@[k'JH%㙤kaUwH=zD1Šk Z%,b*D"cSN2J1ET`ix(F ^; p#G7C/{~C{"!&bcPbmñCĬ&+Gթ8\ģhn[8ߧ7pY'I#,$S,LجScIkX jb(N`LTF2\B=ipw{R'm׌"O78‰`0B4H%Ũel)q)b$qJ"ʵ0!0*9 Ds/oqI9lL .%j_`W.޾[>%o?%'_P'?F4g:+]\|A0 Pz>h9 ߖdDJW]r N`tEolf ?u\^N)A0x0*,~}y r88e K~qZcp l=S!AUéĄɠ09?rm(/ц2vՊƅW#^<]-QZ7@ #+]x (-Mݶi-  0OwG3m̎vpkidoԙY'nTnYnkgZ"[w0[w>]^tt/ol `2x'!6 cVB0[Crњ;u;"ws;ƹbV*[ 9RA滋ןG-~Oi݌mIpڪ›v1-*j Q`Ѝ1}kqS{Dq@vkf(y` C#wlMcxQ~Νe! q}Ah<¬nƂ%|"J:ܾIQHw'ZBظr,ilYUCRWy)Z`/5pRk)u % ] kޟ {r`Rk|<ݔ.SxDLNXu2%W<'W],Bcml9O]}G.k= b5I6m/9l&}*M,䙛hMiwТw tBQƻ}[xwB8x7N5Fr[{ kncp y&B%vN?` wn-\¹5__&ӫ/n|qcLk2//_d̹Q,C[Zqcg8cg%i.NUbCU*Ɣ9+Jb@%gE_?jc(f:©MyY4}ޑTob|g[G Wֲs6(yY8\=+'Or٧l6|yeԄ-*Q*?3H)eO#4ݖZ6O)PATUCZmJ)YV䬨qf`f&NoCXB"&؈ !OGZ(8&F(v(b;Vc>~sj0"嫗ZvzQ=d9_cL]/xV%R2_GwrMQV}4b+$6 eB?≶PRpՉoJpԵX {ئ3iq&2 Q/3l5a0@ǣz JvGqi"ϝE;f#/ T@,WJJn`Z% "aV)R& (}xgռ{Z0߃x[1aH[:J{{r79x 22۟&65IV8ێ?[|.& ch@4FJ?U+YņskRIM"Q&ADH6%ҖE"&`NwaC jU[.7;]nI4~.r$d`d@’=~(U"MclAJFRT!%\je'%0KkNyϚlw4! 
qV.[ /U\ٜ$rdœj֙]5rݘI#ͺ+rq.Ј"~\!Ŋ;9}8Gadny&P6&L101Dk]Oy8ಬ|GyS^y9(`}O/̘ 17j,YTSy"{ǞrƲ ~u7\~#w߰A\Ǔ hF2W3&0t?L%GnqRr܇w>'7gw+|￷y M˽ݼ]ُu ڸ_]{77\EÛq}YsoxEuj|Ƨ!iW6F_-1ˤ,yCߟ3#n,ִmiyz$enë~';^ 9Oώ]o~V2Ƒ|ys 3?~q38<zu?hىӊ x\K*.c-[<_w ԙ&wG0o&Qmj6p9g  7DO7̓!y6}Mr?};ݕ$o$f>:|?/46ۋ_S3x;NgZ#XGo'̊@cܗz5Ua8%~wg/Ăڢ\ xAqvy2-l[& Ż-x|s-~ZY\0uv'/Jm""VW\xy|Ub_[u~'mV(s{KDUMr* UOMgH7VO|T;V T->\Yi_\ox@bge/(W{.s2P X6NeܓD ڥRBϜ b*+BzZ'!E%r2;3+N^K/7L- ?xX2ڂɠ`a߳xan\ŨK b$iG\Y p]y͊FѨH) X~^S#BI۫PoRAS26J tJ+u$UV @xV\@g`Hy+5T#U`,~p&R3$eRRy_j%B7M ՝`7es^SFUcᣳ'tM4ȦHrǻQ^8آw tBQƻN9j̻nUX37 *=jyǻ1T[nN;x6gHјw ?!ݪgnSTTBXʦSSXOn~ )"JBmJ-KmW ;JpoВ?[ Io@5O]]BV-b8žn U\{?QI9*.Elx#Er}i7E%;=e[*~REH+D}DD=z)+6Rc2JXmbKDQ"qX˖;*b[ȰL0'm3ل+ \4M0!\6c_7a -JVOڰ{э:T?`|#,i<.4,.b6@sFGfW~/fpQIȥa;8V NU8ZCjAusCMi v1*5,Q$86h ˻2DgAPrZFSZ w閄@'[bd9 OêwJbS+=)ʂx9Qre)K-e;{K-ty 9ݔZAr"5 H{toR$|BԾI $wu2/&w"l-:p:-T28ė y&dSZt>n1 P |L'e[xnݪgn -.!@׍=mZR A%?Yg\vgҶJ Q)S+<&i9)RHsa#(e4" X&6!m9)p$ ;&uf5j?QGh.(uFw~}Qt Ws~Ǖ83OwX $ E~L^=m1;PuFw\ L=>+c2qts ).tײib__%Am$Nr݇YLcN= H%H([,HGJd;(ڿdb(6SFQ,VTaG` lclZ/m=\{dﳯ4]3Z吃Qpt[e֙ *OK}rB څtF׃7f9}P27up=/n}@πohf7yowrSr 3p٧r7puKcvs o?Rm&~ R5'AY"۲_Dћw kpeݽGs1nٛPӤҍܤt'W{_aw_w<>}jr ^dƯaͯ^ %ï3H=_2;>>'XSQ\]ݧ#bbM(y<|1<l󥭼t0<-6R%'h>_񴲁V{cݬ,pb]n>#Dd9؍]+W = x|TX 1e*L+T1I^x Y}_=ՈlWepMkؚ%Jdžduk Qau]PhA$ ڻvGzujm&k@1ƷYX]&? O)P*3DdG0S:Woc%irȋ*UfdFd #r1#"5J򷷍HqS%PJ$vnYk(Ģ2-mY[Fe[Fe[FvK/ [rrvZR[[kud\H:;\jѯ̑PADȘ6ଅ,pH2jY mB08*G\Vf9 #H[xҗp,s7兾Nu[Nh!q oN>s8UdF*bfCx-V]G2T_flPs`h4DA4DA4DA4DA4D٠TI8Wl7Uo<0%/3Zr)ڴ)78 ;L1sST> ؕkм#4 ;BaUU/5u GJZRߍ\KgƝj:pfr%1PtڴW=,M[$*9n`~Kvb \kwqWZX(kχյvk!\XQSZ[P4M4BBSNxR4թQ9-ҜSL0X 5i8=Z׃ڈYFvP|7WYSjkIH,MRiM %G?s3}g~wy{n# @SpHܤ,0#X<r.qlK~Q6V.Ha%^Mض``HgfB>Zv K&[cyg{pfNmz i[p1FOAtHD=ϿzRiU4EɄ|#)BVPf̨bNRReJ)p6)f*N9+ β43%e@mH6qA /s,c~:@Ќ"/Lg6j,:55TB pp-)'d)Js%ovZ40UrzѸ}1< !w@+OgMS [eu+ 7opno "SB,pslh߹^Oq4䶑V[=8l 1Jmc]VV~pWw.8QD SrZNrD-͝ ]nV;>_s=oy}ZQrm<>Mfhf[㄀c;')tʎO~z'=S@e直ԞΧո5u98ݟuNpg!g\VSS\>4{Y.=9g u_Y8ǽӁk A)ЪPV2S rbߎ>d|5!V$o{g핎;3l4YZQk7(-jJm p>9?wϓph D%T$ 3bLnyՁ}W;92]/߇=;\$yoЪ|h*S5L;@ ɻq.3 -&I"T (%fK\]?p. 
"OAXʋ>I4`p3  ƊzJ1 .p+Y!z ӡ`:\*̀~ɧr%"ĀɬIR)JTK ]u|cgZeB,'"V$X -4H0R8=ymI+^ϠQt69Lh2XxrS +\aDp!cKEvODv@4o*CaoԽJ TQsHxɐH)K^QהclBPBI T EUJ;+1e8F*1|(TnRܓİlOJ ^Ҙ7R% ?͛~-!d TsaUH%޸+ʌdXBZbv3FS=kaTbcyaTZ n`Aj{$< T4Qhc-jccjQd:ʆD+.F(ʘH[D(WLdSFԥyj\DW5 #%*i1u.B(_OxPl%,*˧/:#CM4RL,$oO 3z>7a<*u{vB@Dr#9?2P\7WlBL`*g?b$P6ܘl= 5"X=]LzzwsJ|aXz@,4X<]OS H^~c>z9uք4_"#+ hE#s5E}n"w,߅k̷i;YXJEj?q !rLoU+֒FDLpϗ;3g 3MGuC=]yϻC$}-CX!&&!8Ĥ:fkX 1Uc ?0ʔcQJC[#a ϶V%.@u<>j LW6'D`H*>o (J ~m|B]$-xg=P2PɬB/e(DPu7Y>-'POw~`ϣf) =pxWn\7`|A\_<ـ[B vͭ6FU?w.VGTipTg\n\\` :;ϽEÝQ\yWOǹaRp,NCtc%.IƆT`TKﴻM>HQ70)qa˹6hz4l;{ EKƬ0l4nk'}kK*דy  T^=<֨x5Ic /WgB`~"?_.g˓{mw7!Z{p Xzz _ `waqwGMg*lܷ^Lf@8Uo9&|+$䍋h]OEO[º5A4}Gv xެ[RHֆqm"SGO[CD9Ɔn9b4wy(=ﺓD!l֮l砾|50"a}nwqٹV0A}u[0u-DP01  X+ykw!4ᕬ$tqQ]{W ݬ4.֌gYh@~n¿fXhd%߳gMKȏnm6H.O~60m>@(TDʒRgf(loʼn@4P:&ΖZpB~ ͒yVRplD[?ʳq4h1ܪ$A{ ??( =T0 i) ^hQTh`ƨ #!W󇋟jۈI$d$U򅹝_:uO"]t'<vJ6QDf41a7,,f ύ+ļ!/hEjHZE[+bNcgczc8>'=EJKª]xk"_#~&,I[W y#~P%^ aSH2zG{q)vOԜCx8lũa9#2>W8_oCt~{vH d-_q|Bt6gȇ"ثMn])ᘳ_n~\3g 0xpcEgcC OC*8"dp7LJ U#fe}= T6r@ы.[( ZDz,BmNYaR oH0Ti l/fNZ9 sya|q|ekQeTzx/ZrvQg3AcY:P(c˄$LrؚD,M5NIvO[S.;Ll5U0D;%{IH}I}֟8?_CRCo+4cL󱟝LU8fIiCx >j(V Ѫ9OLs/bH>1^DBg3N[4}qmxM [Q aE^ DzU۩a}$a]vq }NXBJZ3v;\ j;5$!S)J@pIgI^|t B#uAYǝnA" 2AZQSO?oqW]W=2I6XxLtg;sT㸴Y LD;#3m؇`DIa%NZ,i]1EjMʏSiqĐ**-NQNp*Z .lKHBRQ .2غol]tm1vU͢.vmC #yԖ Kc twN(>W WՔCt5~2`Gø2=|li5#ړX^v =5A;^a_Q֖SM$V)gB5.^jd {yI볢?8[rBKXJ;RۨsSuu BS*S{u1=A]x|atsK6j@ujQ)8zzFx|ҥq>n̡5u DXJ̡{E8i7Z;(s<wѭ}ʂ,d̡bSϛCEMPP`dL]$A+@c5\e;yVy#,tf̡6b&;&DMe{c"Cs<sNd)ICȁKm#SJ6:'a;\W<@*qm:\&V:`Oگ+:l.5W#7-Q+h;UcyA,/(c>Ñ6IY*'TpP6Tyɱ)Tn (ضiwHCv*ّ.qLj*t8Dt{G;BB,At`;['i+ICȎ KD?¥vcЁx!bo(ֽDjSv汍M47p nyA | Cs|C}#9NN4ٖo[A|&ՈE3xϛ:y2lSȡ:4FЦ;|sbѪg,Z֘y E;1FwZ;MZ:YxZ7Npr%7'-B{S8ڽ" %1nOaz.+i ׉hA-7[~ }hf8]Red浲[Т[g3\hc Ywt~A2 4cPʲ&v"x+HJY;z /O2r5)¹7MfƊC*G-;Lizr?OVnoz+B>$dpRyvLo,I;U3Yr;0 W> wm%\ |=N -GshU/?eUSAG7\<`ރc3t-YX!ϥ\*ћ1 ?9m0OhAdNh*[h)DK_GC??οјzN2<·X4X}R 0AC?o!Yx" a"27]E 9B" .uZyu4υ6k=>,2}UTuS+{^txF{ݰtt$4Ĉ̗A9 
var/home/core/zuul-output/logs/kubelet.log:

Feb 21 00:06:36 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 21 00:06:36 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 21 00:06:36 crc restorecon[4702]:
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 
21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 
21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 21 00:06:36 crc restorecon[4702]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 21 00:06:36 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 
00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 
crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 21 00:06:37 crc restorecon[4702]:
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 
00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 
00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 00:06:37 crc restorecon[4702]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc 
restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 00:06:37 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 21 00:06:38 crc kubenswrapper[4730]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.410181 4730 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415675 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415744 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415754 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415781 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415791 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415799 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415807 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415817 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415826 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 
00:06:38.415835 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415842 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415850 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415861 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415872 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415882 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415892 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415900 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415909 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415918 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415926 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415934 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415968 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415980 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415990 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.415998 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416008 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416015 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416024 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416032 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416039 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416047 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416055 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416064 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416074 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416082 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416090 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 
00:06:38.416098 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416106 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416114 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416122 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416131 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416141 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416149 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416159 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416170 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416178 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416188 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416196 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416204 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416212 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416220 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416228 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416235 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416243 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416253 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416263 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416271 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416279 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416287 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416296 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416304 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416311 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416322 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416331 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416339 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416348 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416356 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416364 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416372 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416380 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.416389 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416534 4730 flags.go:64] FLAG: --address="0.0.0.0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416551 4730 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416565 4730 flags.go:64] FLAG: --anonymous-auth="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416577 4730 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416590 4730 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416599 4730 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416612 4730 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416623 4730 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416632 4730 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416643 4730 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416653 4730 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416663 4730 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416672 4730 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416681 4730 flags.go:64] FLAG: --cgroup-root=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416690 4730 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416699 4730 flags.go:64] FLAG: --client-ca-file=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416707 4730 flags.go:64] FLAG: --cloud-config=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416716 4730 flags.go:64] FLAG: --cloud-provider=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416725 4730 flags.go:64] FLAG: --cluster-dns="[]"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416735 4730 flags.go:64] FLAG: --cluster-domain=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416745 4730 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416754 4730 flags.go:64] FLAG: --config-dir=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416763 4730 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416773 4730 flags.go:64] FLAG: --container-log-max-files="5"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416784 4730 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416793 4730 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416803 4730 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416812 4730 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416821 4730 flags.go:64] FLAG: --contention-profiling="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416830 4730 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416839 4730 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416848 4730 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416858 4730 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416869 4730 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416880 4730 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416889 4730 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416898 4730 flags.go:64] FLAG: --enable-load-reader="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416907 4730 flags.go:64] FLAG: --enable-server="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416916 4730 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416928 4730 flags.go:64] FLAG: --event-burst="100"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416937 4730 flags.go:64] FLAG: --event-qps="50"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416972 4730 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416982 4730 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.416991 4730 flags.go:64] FLAG: --eviction-hard=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417002 4730 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417010 4730 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417021 4730 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417030 4730 flags.go:64] FLAG: --eviction-soft=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417039 4730 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417048 4730 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417057 4730 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417066 4730 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417075 4730 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417084 4730 flags.go:64] FLAG: --fail-swap-on="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417093 4730 flags.go:64] FLAG: --feature-gates=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417104 4730 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417114 4730 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417123 4730 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417132 4730 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417141 4730 flags.go:64] FLAG: --healthz-port="10248"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417151 4730 flags.go:64] FLAG: --help="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417160 4730 flags.go:64] FLAG: --hostname-override=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417169 4730 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417178 4730 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417190 4730 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417199 4730 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417208 4730 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417217 4730 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417226 4730 flags.go:64] FLAG: --image-service-endpoint=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417234 4730 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417244 4730 flags.go:64] FLAG: --kube-api-burst="100"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417253 4730 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417263 4730 flags.go:64] FLAG: --kube-api-qps="50"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417272 4730 flags.go:64] FLAG: --kube-reserved=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417281 4730 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417304 4730 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417313 4730 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417322 4730 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417331 4730 flags.go:64] FLAG: --lock-file=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417340 4730 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417349 4730 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417358 4730 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417371 4730 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417381 4730 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417391 4730 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417403 4730 flags.go:64] FLAG: --logging-format="text"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417413 4730 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417423 4730 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417433 4730 flags.go:64] FLAG: --manifest-url=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417442 4730 flags.go:64] FLAG: --manifest-url-header=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417454 4730 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417463 4730 flags.go:64] FLAG: --max-open-files="1000000"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417474 4730 flags.go:64] FLAG: --max-pods="110"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417483 4730 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417493 4730 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417502 4730 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417511 4730 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417520 4730 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417530 4730 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417539 4730 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417559 4730 flags.go:64] FLAG: --node-status-max-images="50"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417568 4730 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417577 4730 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417587 4730 flags.go:64] FLAG: --pod-cidr=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417596 4730 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417608 4730 flags.go:64] FLAG: --pod-manifest-path=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417619 4730 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417628 4730 flags.go:64] FLAG: --pods-per-core="0"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417637 4730 flags.go:64] FLAG: --port="10250"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417647 4730 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417656 4730 flags.go:64] FLAG: --provider-id=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417664 4730 flags.go:64] FLAG: --qos-reserved=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417674 4730 flags.go:64] FLAG: --read-only-port="10255"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417709 4730 flags.go:64] FLAG: --register-node="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417718 4730 flags.go:64] FLAG: --register-schedulable="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417727 4730 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417742 4730 flags.go:64] FLAG: --registry-burst="10"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417750 4730 flags.go:64] FLAG: --registry-qps="5"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417760 4730 flags.go:64] FLAG: --reserved-cpus=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417768 4730 flags.go:64] FLAG: --reserved-memory=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417780 4730 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417789 4730 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417799 4730 flags.go:64] FLAG: --rotate-certificates="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417808 4730 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417817 4730 flags.go:64] FLAG: --runonce="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417826 4730 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417835 4730 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417844 4730 flags.go:64] FLAG: --seccomp-default="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417853 4730 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417862 4730 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417872 4730 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417881 4730 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417891 4730 flags.go:64] FLAG: --storage-driver-password="root"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417900 4730 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417909 4730 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417918 4730 flags.go:64] FLAG: --storage-driver-user="root"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417927 4730 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417936 4730 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417970 4730 flags.go:64] FLAG: --system-cgroups=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417980 4730 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.417994 4730 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418003 4730 flags.go:64] FLAG: --tls-cert-file=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418014 4730 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418024 4730 flags.go:64] FLAG: --tls-min-version=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418041 4730 flags.go:64] FLAG: --tls-private-key-file=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418050 4730 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418060 4730 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418072 4730 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418083 4730 flags.go:64] FLAG: --v="2"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418097 4730 flags.go:64] FLAG: --version="false"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418112 4730 flags.go:64] FLAG: --vmodule=""
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418134 4730 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.418147 4730 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418392 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418406 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418418 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418427 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418436 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418446 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418457 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418467 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418476 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418485 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418494 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418503 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418512 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418521 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418529 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418537 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418545 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418553 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418561 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418569 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418577 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418584 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418592 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418600 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418608 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418617 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418624 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418633 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418641 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418649 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418656 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418664 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418671 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418679 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418687 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418697 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418705 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418716 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418727 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418738 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418757 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.418765 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420555 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420572 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420582 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420591 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420599 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420607 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420614 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420622 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420631 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420638 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420646 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420654 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420662 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420671 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420679 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420687 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420703 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420712 4730 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420720 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420730 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420738 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420745 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420753 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420761 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420769 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420776 4730 feature_gate.go:330] unrecognized feature gate: Example
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420784 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420792 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.420800 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.421641 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.435675 4730 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.435754 4730 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.435984 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436017 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436032 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436045 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436057 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436068 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436078 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436089 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436102 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436118 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436132 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436146 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436156 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436168 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436180 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436191 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436201 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436213 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436225 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436237 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436252 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436265 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436277 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436289 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436300 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436313 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436324 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436334 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436348 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436361 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436371 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436381 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436392 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436403 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436413 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436424 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436434 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436445 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436456 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436466 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436476 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436486 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436496 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 00:06:38 crc kubenswrapper[4730]: 
W0221 00:06:38.436506 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436516 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436526 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436536 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436546 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436557 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436567 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436578 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436589 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436602 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436612 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436624 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436634 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436644 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436654 4730 feature_gate.go:330] unrecognized 
feature gate: MachineAPIMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436664 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436674 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436684 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436696 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436705 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436715 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436726 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436736 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436747 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436757 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436766 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436777 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.436787 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.436805 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437213 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437238 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437250 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437261 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437271 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437285 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437298 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437311 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437322 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437333 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437344 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437355 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437365 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437374 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437384 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437394 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437404 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437413 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437423 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437433 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437443 4730 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437452 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437461 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437471 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437481 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437490 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437500 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437510 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437520 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437530 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437540 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437550 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437559 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437569 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437579 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 00:06:38 
crc kubenswrapper[4730]: W0221 00:06:38.437590 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437599 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437612 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437621 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437631 4730 feature_gate.go:330] unrecognized feature gate: Example Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437641 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437650 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437660 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437701 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437714 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437728 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437738 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437748 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437759 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437770 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437782 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437792 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437802 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437812 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437822 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437835 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437848 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437859 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437870 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437881 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437892 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437905 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437916 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437927 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437937 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437980 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.437994 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.438004 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.438013 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.438023 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 00:06:38 crc kubenswrapper[4730]: 
W0221 00:06:38.438033 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.438049 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.439378 4730 server.go:940] "Client rotation is on, will bootstrap in background" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.445792 4730 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.446012 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.448029 4730 server.go:997] "Starting client certificate rotation" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.448081 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.449320 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 15:41:00.52888706 +0000 UTC Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.449475 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.475350 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.479701 4730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.482358 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.494500 4730 log.go:25] "Validated CRI v1 runtime API" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.536077 4730 log.go:25] "Validated CRI v1 image API" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.538926 4730 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.544377 4730 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-21-00-00-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.544432 4730 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:43 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.576528 4730 manager.go:217] Machine: {Timestamp:2026-02-21 00:06:38.571816729 +0000 UTC m=+0.583383704 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:47786c44-bba6-409d-9771-9e2e16f93f54 BootID:c2d16590-4847-4316-a218-c611e1dabc66 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:43 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7f:45:5a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7f:45:5a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bb:75:38 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:23:c6:80 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:2a:53 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f2:ab:31 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:e8:87:9f:2a:6d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:f3:7d:45:e5:0e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.577055 4730 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.577502 4730 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.579251 4730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.579624 4730 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.579686 4730 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.580908 4730 topology_manager.go:138] "Creating topology manager with none policy" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.580977 4730 container_manager_linux.go:303] "Creating device plugin manager" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.581586 4730 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.581678 4730 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.582658 4730 state_mem.go:36] "Initialized new in-memory state store" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.582880 4730 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.588563 4730 kubelet.go:418] "Attempting to sync node with API server" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.588626 4730 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.588671 4730 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.588694 4730 kubelet.go:324] "Adding apiserver pod source" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.588719 4730 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.593177 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.593202 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.593338 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.593361 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.594880 4730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.596300 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.598045 4730 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599629 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599684 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599709 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599728 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599757 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599773 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599789 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599813 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599831 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599846 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599884 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.599898 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.601865 4730 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.602622 4730 server.go:1280] "Started kubelet" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.602791 4730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.604134 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.603127 4730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.605258 4730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 21 00:06:38 crc systemd[1]: Started Kubernetes Kubelet. Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.607664 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.607775 4730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.608614 4730 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.608641 4730 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.608703 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.608828 4730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.608835 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:05:18.359974177 +0000 UTC Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.609724 4730 server.go:460] "Adding debug handlers to kubelet server" Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.610277 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.610347 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.610332 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.611294 4730 factory.go:55] Registering systemd factory Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.612101 4730 factory.go:221] Registration of the systemd container factory successfully Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.610540 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18961a47af47977b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 00:06:38.602573691 +0000 UTC m=+0.614140656,LastTimestamp:2026-02-21 00:06:38.602573691 +0000 UTC m=+0.614140656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.617213 4730 factory.go:153] Registering CRI-O factory Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.617265 4730 factory.go:221] Registration of the crio container factory successfully Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.617425 4730 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.617469 4730 factory.go:103] Registering Raw factory Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.617521 4730 manager.go:1196] Started watching for new ooms in manager Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.620384 4730 manager.go:319] Starting recovery of all containers Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629803 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629878 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629890 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629905 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629918 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.629931 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630016 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630033 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630077 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630095 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630107 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630121 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630134 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630152 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630163 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630176 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630192 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630205 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630217 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630228 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: 
I0221 00:06:38.630245 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630256 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630270 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630286 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630331 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630352 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630368 4730 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630384 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630398 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630410 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630421 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630432 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630451 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630464 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630477 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630491 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630518 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630532 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630546 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630559 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630572 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630584 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630597 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630610 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630624 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630637 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630655 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630672 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630686 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630701 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630715 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630763 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630788 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630804 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630817 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630833 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630848 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 
00:06:38.630901 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630915 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.630930 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.632874 4730 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.632902 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.632937 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633025 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633038 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633052 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633066 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633080 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633094 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633107 4730 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633119 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633133 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633145 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633160 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633171 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633184 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633196 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633209 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633221 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633238 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633257 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633270 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633283 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633297 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633312 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633325 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633339 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633351 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633374 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633387 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633400 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633412 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633424 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633438 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633449 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633461 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633474 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633487 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633499 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633511 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633525 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633537 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633550 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633564 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633600 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633622 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633633 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633646 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633658 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633669 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633681 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633696 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 
00:06:38.633710 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633723 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633735 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633747 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633758 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633771 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633784 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633798 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633810 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633822 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633832 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633844 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633855 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633866 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633878 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633891 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633903 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633914 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633925 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633951 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633967 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633978 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.633989 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634001 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634012 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634024 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634035 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634046 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634057 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634068 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634080 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 
00:06:38.634098 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634111 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634124 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634136 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634148 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634163 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634173 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634185 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634197 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634209 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634222 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634235 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634249 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634264 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634275 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634290 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634302 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634313 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634325 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634335 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634348 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634362 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634374 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634388 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634400 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" 
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634443 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634458 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634470 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634482 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634495 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634508 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634520 4730 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634534 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634545 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634561 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634572 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634584 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634599 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634611 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634623 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634635 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634648 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634662 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634674 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634688 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634701 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634714 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634728 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634740 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634753 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634767 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634780 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634793 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634806 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634820 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634833 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634846 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634859 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634871 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634884 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634897 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634910 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634922 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634936 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634962 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634976 4730 reconstruct.go:97] "Volume reconstruction finished" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.634985 4730 reconciler.go:26] "Reconciler: start to sync state" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.656078 4730 manager.go:324] Recovery completed Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.675663 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.678509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.678570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.678584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.679696 4730 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.679725 4730 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.679755 4730 state_mem.go:36] "Initialized new in-memory state store" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.687812 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.691848 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.691980 4730 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.692036 4730 kubelet.go:2335] "Starting kubelet main sync loop" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.692130 4730 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.694458 4730 policy_none.go:49] "None policy: Start" Feb 21 00:06:38 crc kubenswrapper[4730]: W0221 00:06:38.694730 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.694852 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" 
logger="UnhandledError" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.696039 4730 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.696105 4730 state_mem.go:35] "Initializing new in-memory state store" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.709408 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.779202 4730 manager.go:334] "Starting Device Plugin manager" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.779272 4730 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.779293 4730 server.go:79] "Starting device plugin registration server" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.779879 4730 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.779906 4730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.780132 4730 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.780392 4730 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.780423 4730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.790715 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.793003 4730 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.793234 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.795639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.795698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.795717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.796005 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.796871 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.796930 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.798662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.799145 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.799638 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.799704 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.802773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.802806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.802819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.802981 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.803171 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.803205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.803227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.803207 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.803516 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.805581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.805658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.805797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.806461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.806504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.806525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.806863 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.807478 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.807596 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.809901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810639 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.810913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.811267 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 
00:06:38.812407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.812464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.812485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.837796 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.837857 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.837886 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.837970 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838007 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838042 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838123 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838197 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838247 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838287 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838408 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838480 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838519 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.838549 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.880603 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.881967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.882004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.882014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.882040 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 00:06:38 crc kubenswrapper[4730]: E0221 00:06:38.882613 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940033 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940111 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940128 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940150 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940170 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 
crc kubenswrapper[4730]: I0221 00:06:38.940210 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940230 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940244 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940278 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940296 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940313 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940764 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940784 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940806 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940830 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940850 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940890 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940930 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.940965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.941074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.941066 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.941110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 00:06:38 crc kubenswrapper[4730]: I0221 00:06:38.941089 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.083215 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.085234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.085300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.085318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.085355 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.086273 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.133152 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.157304 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.168410 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.182842 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-96da1afc03299cc8b5dbbaa30f9b2752a712ab8de3df6d94d5f74b6a3dd28ae0 WatchSource:0}: Error finding container 96da1afc03299cc8b5dbbaa30f9b2752a712ab8de3df6d94d5f74b6a3dd28ae0: Status 404 returned error can't find the container with id 96da1afc03299cc8b5dbbaa30f9b2752a712ab8de3df6d94d5f74b6a3dd28ae0
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.189309 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.200831 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.201482 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ffe852864f6f2955b6eb7dc83650efda7f82a3876ba52edd0a0e4e5c0aeb7511 WatchSource:0}: Error finding container ffe852864f6f2955b6eb7dc83650efda7f82a3876ba52edd0a0e4e5c0aeb7511: Status 404 returned error can't find the container with id ffe852864f6f2955b6eb7dc83650efda7f82a3876ba52edd0a0e4e5c0aeb7511
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.205755 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-df02608baef6afa91241ac9f7e9531a9656077d2eacc9ac3bf11725d01a9e5b3 WatchSource:0}: Error finding container df02608baef6afa91241ac9f7e9531a9656077d2eacc9ac3bf11725d01a9e5b3: Status 404 returned error can't find the container with id df02608baef6afa91241ac9f7e9531a9656077d2eacc9ac3bf11725d01a9e5b3
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.213210 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.231394 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7273f4f5f739a063333652f5b5be5b36e36f8d7fa0a30232faffabe8e6a3a58f WatchSource:0}: Error finding container 7273f4f5f739a063333652f5b5be5b36e36f8d7fa0a30232faffabe8e6a3a58f: Status 404 returned error can't find the container with id 7273f4f5f739a063333652f5b5be5b36e36f8d7fa0a30232faffabe8e6a3a58f
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.487163 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.489592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.489655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.489668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.489695 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.490257 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.503755 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.503845 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.525254 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.525350 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.605541 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.609529 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:16:13.527675232 +0000 UTC
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.700514 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df02608baef6afa91241ac9f7e9531a9656077d2eacc9ac3bf11725d01a9e5b3"}
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.701994 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ffe852864f6f2955b6eb7dc83650efda7f82a3876ba52edd0a0e4e5c0aeb7511"}
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.704178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"96da1afc03299cc8b5dbbaa30f9b2752a712ab8de3df6d94d5f74b6a3dd28ae0"}
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.706278 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7273f4f5f739a063333652f5b5be5b36e36f8d7fa0a30232faffabe8e6a3a58f"}
Feb 21 00:06:39 crc kubenswrapper[4730]: I0221 00:06:39.707248 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5e4b0c11655cebb9d828cc2e67d8010f04cfc3282ff405cadf048e8b3317447e"}
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.776756 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.776900 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:39 crc kubenswrapper[4730]: W0221 00:06:39.800783 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:39 crc kubenswrapper[4730]: E0221 00:06:39.800917 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:40 crc kubenswrapper[4730]: E0221 00:06:40.014307 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.290566 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.292723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.292845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.292867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.292912 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 00:06:40 crc kubenswrapper[4730]: E0221 00:06:40.293657 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.556766 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 21 00:06:40 crc kubenswrapper[4730]: E0221 00:06:40.558742 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.605668 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.609696 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:28:15.061365411 +0000 UTC
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.714222 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc" exitCode=0
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.714353 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.714446 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.715700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.715748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.715763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.718118 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722" exitCode=0
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.718186 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.718263 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.719643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.719675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.719687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.720066 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.721918 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96" exitCode=0
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.722010 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.722094 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.722551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.722577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.722590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.723131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.723171 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.723187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.726271 4730 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f" exitCode=0
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.726354 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.726533 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.729141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.729200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.729235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.738260 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.738324 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec"}
Feb 21 00:06:40 crc kubenswrapper[4730]: I0221 00:06:40.738348 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f"}
Feb 21 00:06:41 crc kubenswrapper[4730]: E0221 00:06:41.163398 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18961a47af47977b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 00:06:38.602573691 +0000 UTC m=+0.614140656,LastTimestamp:2026-02-21 00:06:38.602573691 +0000 UTC m=+0.614140656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 21 00:06:41 crc kubenswrapper[4730]: W0221 00:06:41.426514 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:41 crc kubenswrapper[4730]: E0221 00:06:41.426639 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:41 crc kubenswrapper[4730]: W0221 00:06:41.572054 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:41 crc kubenswrapper[4730]: E0221 00:06:41.572207 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.605406 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.610681 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:58:29.507486041 +0000 UTC
Feb 21 00:06:41 crc kubenswrapper[4730]: E0221 00:06:41.615483 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.744136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.744234 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.744248 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.744258 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.745714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.745757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.745790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.747306 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.747471 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.748295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.748320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.748329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.750294 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.750318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.750329 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.750338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751273 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5" exitCode=0
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751316 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751401 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.751977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.759183 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1b1a4186f767d15fe95a75c03b53e279537c7e7d534cffec8f37ddab3962dc74"}
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.759229 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.771253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.771315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.771328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.894199 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.895856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.895908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.895928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:41 crc kubenswrapper[4730]: I0221 00:06:41.896008 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 00:06:41 crc kubenswrapper[4730]: E0221 00:06:41.896561 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc"
Feb 21 00:06:42 crc kubenswrapper[4730]: W0221 00:06:42.060983 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused
Feb 21 00:06:42 crc kubenswrapper[4730]: E0221 00:06:42.061130 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.611244 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:30:03.582327331 +0000 UTC
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.736912 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.769649 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a"}
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.769790 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.771810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.771871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.771893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777499 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96" exitCode=0
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777709 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96"}
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777767 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777737 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777834 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.777746 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780433 4730
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.780881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:42 crc kubenswrapper[4730]: I0221 00:06:42.842546 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.527498 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.611579 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:46:36.378667556 +0000 UTC Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f"} Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96"} Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793424 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73"} Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793462 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793483 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.793549 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.794058 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.795421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.795469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.795481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.796844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.796876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.796889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.796856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.796998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:43 crc kubenswrapper[4730]: I0221 00:06:43.797011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.289275 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.612674 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:06:11.721873229 +0000 UTC Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.732867 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69"} Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802174 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36"} Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802211 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802277 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802305 4730 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.802228 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:44 crc kubenswrapper[4730]: I0221 00:06:44.804441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.097324 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.099361 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.099414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.099434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.099479 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.204131 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.613312 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:59:07.859214467 +0000 UTC Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.763894 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.806262 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.806284 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.808413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.808457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.808471 4730 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.809328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.809399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:45 crc kubenswrapper[4730]: I0221 00:06:45.809423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.318820 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.613856 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:35:12.093580083 +0000 UTC Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.810362 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.810513 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.811320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.811373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.811390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.811839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 
00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.811909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:46 crc kubenswrapper[4730]: I0221 00:06:46.812005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.612913 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.613339 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.614152 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:25:31.256287409 +0000 UTC Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.615036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.615161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.615240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.621244 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.812428 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.813689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.813935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:47 crc kubenswrapper[4730]: I0221 00:06:47.814152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:48 crc kubenswrapper[4730]: I0221 00:06:48.615019 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:42:19.675308886 +0000 UTC Feb 21 00:06:48 crc kubenswrapper[4730]: E0221 00:06:48.792163 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.293205 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.293487 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.294940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.295157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.295222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:49 crc kubenswrapper[4730]: I0221 00:06:49.615795 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:21:57.951285319 +0000 UTC Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.503754 4730 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.504056 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.508436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.508504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.508530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.512813 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.616902 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:22:34.045211616 +0000 UTC Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.822312 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.823806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.823859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:50 crc kubenswrapper[4730]: I0221 00:06:50.823876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:51 crc kubenswrapper[4730]: I0221 00:06:51.617747 
4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:35:52.110006114 +0000 UTC Feb 21 00:06:52 crc kubenswrapper[4730]: W0221 00:06:52.396288 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.396440 4730 trace.go:236] Trace[2025922177]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 00:06:42.394) (total time: 10001ms): Feb 21 00:06:52 crc kubenswrapper[4730]: Trace[2025922177]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:52.396) Feb 21 00:06:52 crc kubenswrapper[4730]: Trace[2025922177]: [10.001869257s] [10.001869257s] END Feb 21 00:06:52 crc kubenswrapper[4730]: E0221 00:06:52.396474 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.606602 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.619157 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:06:55.996098354 +0000 UTC Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 
00:06:52.830267 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.830355 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.840566 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 00:06:52 crc kubenswrapper[4730]: I0221 00:06:52.840649 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 00:06:53 crc kubenswrapper[4730]: I0221 00:06:53.503781 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:06:53 crc kubenswrapper[4730]: I0221 00:06:53.503936 4730 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 21 00:06:53 crc kubenswrapper[4730]: I0221 00:06:53.619831 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:38:29.529637246 +0000 UTC Feb 21 00:06:54 crc kubenswrapper[4730]: I0221 00:06:54.620806 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:07:50.981063314 +0000 UTC Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.621985 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:50:40.211465228 +0000 UTC Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.774571 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.775159 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.776850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.776981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.777010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 
00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.782914 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.839548 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.839658 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.841308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.841379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:06:55 crc kubenswrapper[4730]: I0221 00:06:55.841397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:06:56 crc kubenswrapper[4730]: I0221 00:06:56.209985 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 00:06:56 crc kubenswrapper[4730]: I0221 00:06:56.622513 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:39:02.780985191 +0000 UTC Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.622801 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:41:20.677230864 +0000 UTC Feb 21 00:06:57 crc kubenswrapper[4730]: E0221 00:06:57.831430 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 21 00:06:57 crc 
kubenswrapper[4730]: I0221 00:06:57.833979 4730 trace.go:236] Trace[1965148816]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 00:06:46.566) (total time: 11267ms):
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[1965148816]: ---"Objects listed" error: 11267ms (00:06:57.833)
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[1965148816]: [11.267136841s] [11.267136841s] END
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.834010 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.835287 4730 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.835367 4730 trace.go:236] Trace[1103440245]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 00:06:46.299) (total time: 11535ms):
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[1103440245]: ---"Objects listed" error: 11535ms (00:06:57.835)
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[1103440245]: [11.535947837s] [11.535947837s] END
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.835396 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 21 00:06:57 crc kubenswrapper[4730]: E0221 00:06:57.837365 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.840733 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.840844 4730 trace.go:236] Trace[981214278]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 00:06:45.403) (total time: 12436ms):
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[981214278]: ---"Objects listed" error: 12436ms (00:06:57.840)
Feb 21 00:06:57 crc kubenswrapper[4730]: Trace[981214278]: [12.436971386s] [12.436971386s] END
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.840867 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.870393 4730 csr.go:261] certificate signing request csr-fcg4r is approved, waiting to be issued
Feb 21 00:06:57 crc kubenswrapper[4730]: I0221 00:06:57.884285 4730 csr.go:257] certificate signing request csr-fcg4r is issued
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147435 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58704->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147502 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58704->192.168.126.11:17697: read: connection reset by peer"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147528 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58714->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147599 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58714->192.168.126.11:17697: read: connection reset by peer"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147917 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.147989 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.448411 4730 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 21 00:06:58 crc kubenswrapper[4730]: W0221 00:06:58.448642 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 21 00:06:58 crc kubenswrapper[4730]: W0221 00:06:58.448733 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 21 00:06:58 crc kubenswrapper[4730]: W0221 00:06:58.448735 4730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.448843 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.164:48084->38.102.83.164:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18961a47d4c4f7cf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 00:06:39.231547343 +0000 UTC m=+1.243114278,LastTimestamp:2026-02-21 00:06:39.231547343 +0000 UTC m=+1.243114278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.601640 4730 apiserver.go:52] "Watching apiserver"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.605868 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.606225 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.606778 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.606837 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.606900 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.606935 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.607032 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.607099 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.607126 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.607138 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.607416 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.609521 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.609681 4730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.609998 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.610609 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.610762 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.613088 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.613237 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.614018 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.614273 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.616377 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.622928 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:35:59.202877752 +0000 UTC
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.641928 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.641978 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642002 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642043 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642058 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642075 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642100 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642115 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642128 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642143 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642158 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642175 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642190 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642204 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642218 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642232 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642248 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642265 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642279 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642296 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642310 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642324 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642365 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642398 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642413 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642416 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642540 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642583 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642631 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642665 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642682 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642700 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642770 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642803 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642829 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642840 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642877 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642910 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.642973 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643012 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643050 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643116 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643187 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643223 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643262 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643363 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643438 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643472 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643505 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643539 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643574 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643589 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643608 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643643 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643713 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643785 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643821 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643913 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.643981 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644030 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644056 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644052 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644137 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644168 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644195 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644221 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644244 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644296 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644342 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644367 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644391 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644421 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644466 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644490 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644545 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644568 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644613 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644645 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644717 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644724 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644739 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644755 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644793 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644807 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644827 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644872 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644882 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644907 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644975 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.644978 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645021 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645040 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645047 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645119 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645173 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645225 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645337 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645428 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645478 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645516 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645562 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 00:06:58 crc 
kubenswrapper[4730]: I0221 00:06:58.645611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645713 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645819 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645866 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645939 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646135 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646184 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 00:06:58 crc 
kubenswrapper[4730]: I0221 00:06:58.646233 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646281 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646329 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646383 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646480 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646652 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646692 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646728 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646763 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646795 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " 
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646828 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646864 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646913 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646995 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647049 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647083 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647118 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647178 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647220 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647269 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647304 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647373 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647409 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647442 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647476 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647563 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647594 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647625 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647696 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647731 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647766 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647801 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647861 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647897 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.647938 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648308 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648347 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648382 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648416 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648459 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648495 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648563 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648597 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648631 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648664 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648699 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648734 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648802 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648835 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648871 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648906 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648972 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649007 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649041 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649076 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 
00:06:58.649110 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649146 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649181 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649216 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649252 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649286 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649357 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649392 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649428 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649463 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649501 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649536 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649572 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649605 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649642 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649714 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649749 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649891 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650034 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650088 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650143 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650190 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650238 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650336 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc 
kubenswrapper[4730]: I0221 00:06:58.650374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650492 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650609 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650636 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650657 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650682 4730 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath 
\"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650705 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650728 4730 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650748 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650772 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650796 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650817 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650840 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 
00:06:58.650862 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650882 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650903 4730 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650923 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650972 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650995 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653208 4730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645567 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645592 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645650 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645815 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645855 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645894 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.645987 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646459 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646505 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646510 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646672 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646838 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646844 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.646853 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648173 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648425 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648666 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649102 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.648888 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649242 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.649909 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650201 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650245 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.664614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650685 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650788 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.650831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.651163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.651170 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.651089 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653158 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653106 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653324 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.653937 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.654253 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.654534 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.654553 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.654611 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.655001 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.655548 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.655581 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.655758 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.655788 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.656130 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.656208 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.656213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.656547 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.656821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.657099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.657278 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.659207 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.659371 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.660275 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.660486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.660586 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.660662 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.662126 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.662435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.662497 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.662984 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663014 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663090 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663218 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663362 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663413 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663507 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663700 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.663974 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.664029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.664137 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.664157 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.664246 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.665274 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.665735 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666311 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666334 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.665726 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666619 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666634 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.666998 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667050 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667108 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667253 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667352 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.667842 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.668309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.668374 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.669315 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.670843 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.670885 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.671116 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.671349 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.671572 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.672373 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.672702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.672815 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.673045 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.673360 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674125 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674160 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674271 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674482 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674686 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674693 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674786 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674854 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674914 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.674961 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675005 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675072 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675098 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675142 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675364 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675466 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675750 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.675722 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.676727 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.661185 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.677036 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.677121 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:06:59.177098801 +0000 UTC m=+21.188665726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.677199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.677447 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.677894 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.678007 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.678234 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.678350 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:06:59.178317746 +0000 UTC m=+21.189884681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.678568 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.678676 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.678763 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:06:59.178751986 +0000 UTC m=+21.190318911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.679225 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.679380 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.679627 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.679245 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.680036 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.680416 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.680459 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.680601 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.681044 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.682098 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.682206 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.684926 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.685256 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.685732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.686005 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.689043 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.689388 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.691445 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.691657 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.691740 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.693575 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.695626 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.696345 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.697180 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.697256 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.698150 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.698753 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.700918 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.701075 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.701203 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.701246 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.701515 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.701936 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.697982 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.702014 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.703497 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.703638 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.703660 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.703673 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.703727 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:06:59.203711087 +0000 UTC m=+21.215278022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.704425 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.704540 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.705249 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.705342 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.705371 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:58 crc kubenswrapper[4730]: E0221 00:06:58.705529 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:06:59.205492824 +0000 UTC m=+21.217059759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.706755 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.706912 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.707038 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.707301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.707599 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.707766 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.708351 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.708393 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.708607 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.708886 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.709917 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.710358 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.711152 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.712534 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.713442 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.713435 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.715339 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.715817 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.717156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.717264 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.724677 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.726182 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.726984 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.728724 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.729285 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.729888 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.731185 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.733189 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.733770 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.734342 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.735439 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.735993 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.737015 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.737618 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.738713 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.739416 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.739813 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.740785 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.741357 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.741798 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.742830 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.742846 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.743292 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.744292 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.744693 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.745685 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.746362 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.747230 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.747763 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.748090 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.748244 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.749043 4730 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.749141 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.750699 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.751513 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.751902 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.752416 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.752494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.752558 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.752598 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753024 4730 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753072 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753087 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753102 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753154 4730 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753169 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.753498 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.754569 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755095 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755167 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755206 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755218 4730 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755230 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755240 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755250 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755259 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755268 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755278 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755287 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755297 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755306 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755315 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755325 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755335 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755346 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755356 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755366 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755375 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755384 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755393 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755403 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755412 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755421 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755429 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755437 4730 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755447 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755466 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755475 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755483 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755491 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755500 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755509 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755518 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755526 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755535 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755544 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755552 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755561 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755570 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755579 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755588 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755596 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755605 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755616 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755624 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755633 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755642 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755652 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755661 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755670 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755678 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755686 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755695 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755704 4730 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755720 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755728 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755736 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755745 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755756 4730 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755764 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755773 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755783 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755791 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755799 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755821 4730 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755830 4730 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755839 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755850 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755865 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755873 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755883 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755894 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755903 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755912 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755920 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755928 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755936 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755979 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755988 4730 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.755997 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756005 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756013 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756022 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756030 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756038 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756046 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756055 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756064 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756071 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756079 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756086 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756095 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756103 4730 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756112 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756121 4730 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756128 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756137 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756144 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756156 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756165 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756175 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756184 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756192 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756200 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756208 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756215 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756223 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: 
I0221 00:06:58.756231 4730 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756240 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756249 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756257 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756265 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756275 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756282 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756290 4730 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756298 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756306 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756314 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756322 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756331 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756342 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756350 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756360 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756368 4730 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756377 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756386 4730 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756395 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756403 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756412 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756420 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756428 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756437 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756446 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756455 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756464 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756473 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on 
node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756482 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756490 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756504 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756512 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756521 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756529 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756537 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756545 4730 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756553 4730 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756562 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756570 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756580 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756588 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756596 4730 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756604 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756612 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756621 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756629 4730 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756692 4730 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756702 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756710 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756722 4730 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756731 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756740 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756749 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756757 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756766 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756793 4730 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756802 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756810 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756835 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756859 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756868 4730 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.756878 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.757120 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.759238 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.760980 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.761385 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.762414 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.763270 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.764270 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.764734 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.765668 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.766518 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.767618 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.768154 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.769060 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.770975 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.771913 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.772272 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.774512 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.775130 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.791548 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.806353 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.815984 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.831941 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.849071 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.849399 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.850586 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a" exitCode=255 Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.850625 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a"} Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.865839 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.870291 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.870545 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lbr58"] Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.871013 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.871625 4730 scope.go:117] "RemoveContainer" containerID="39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.874338 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.874609 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.876604 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rwggg"] Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.877068 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.878525 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.878790 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.879344 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.879507 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.885803 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-21 00:01:57 +0000 UTC, rotation deadline is 2027-01-03 15:03:44.505433921 +0000 UTC Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.885866 4730 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7598h56m45.619569891s for next certificate rotation Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.886862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.895388 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.910114 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.931902 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.945279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.961139 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.961223 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7c446ed-2321-4ed4-a768-17e71bc811ef-serviceca\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 
00:06:58.961256 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566pz\" (UniqueName: \"kubernetes.io/projected/d7c446ed-2321-4ed4-a768-17e71bc811ef-kube-api-access-566pz\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.961274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjfd\" (UniqueName: \"kubernetes.io/projected/805246ab-eb54-4142-bdc1-cd658cfb3615-kube-api-access-bvjfd\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.961298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/805246ab-eb54-4142-bdc1-cd658cfb3615-hosts-file\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.961327 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c446ed-2321-4ed4-a768-17e71bc811ef-host\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.963079 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.972709 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.976510 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 00:06:58 crc kubenswrapper[4730]: I0221 00:06:58.989609 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.004159 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.024395 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.033395 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.043763 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.061658 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/805246ab-eb54-4142-bdc1-cd658cfb3615-hosts-file\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.061723 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c446ed-2321-4ed4-a768-17e71bc811ef-host\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.061746 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7c446ed-2321-4ed4-a768-17e71bc811ef-serviceca\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.061768 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566pz\" (UniqueName: \"kubernetes.io/projected/d7c446ed-2321-4ed4-a768-17e71bc811ef-kube-api-access-566pz\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.061796 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjfd\" (UniqueName: \"kubernetes.io/projected/805246ab-eb54-4142-bdc1-cd658cfb3615-kube-api-access-bvjfd\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.062142 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/805246ab-eb54-4142-bdc1-cd658cfb3615-hosts-file\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.062255 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c446ed-2321-4ed4-a768-17e71bc811ef-host\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.063262 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7c446ed-2321-4ed4-a768-17e71bc811ef-serviceca\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.079710 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjfd\" (UniqueName: \"kubernetes.io/projected/805246ab-eb54-4142-bdc1-cd658cfb3615-kube-api-access-bvjfd\") pod \"node-resolver-rwggg\" (UID: \"805246ab-eb54-4142-bdc1-cd658cfb3615\") " pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.090201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566pz\" (UniqueName: \"kubernetes.io/projected/d7c446ed-2321-4ed4-a768-17e71bc811ef-kube-api-access-566pz\") pod \"node-ca-lbr58\" (UID: \"d7c446ed-2321-4ed4-a768-17e71bc811ef\") " pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.183390 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lbr58" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.191037 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rwggg" Feb 21 00:06:59 crc kubenswrapper[4730]: W0221 00:06:59.209458 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c446ed_2321_4ed4_a768_17e71bc811ef.slice/crio-65e14250e275e00b9228709cc76dcf4f4518e99b987b39445c41141a4ab94c6f WatchSource:0}: Error finding container 65e14250e275e00b9228709cc76dcf4f4518e99b987b39445c41141a4ab94c6f: Status 404 returned error can't find the container with id 65e14250e275e00b9228709cc76dcf4f4518e99b987b39445c41141a4ab94c6f Feb 21 00:06:59 crc kubenswrapper[4730]: W0221 00:06:59.210345 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod805246ab_eb54_4142_bdc1_cd658cfb3615.slice/crio-c5b3e41ccc6e4a55161a08d7ef419cd3a2b852f4637287f82dc0010c4bfcd399 WatchSource:0}: Error finding container c5b3e41ccc6e4a55161a08d7ef419cd3a2b852f4637287f82dc0010c4bfcd399: Status 404 returned error can't find the container with id c5b3e41ccc6e4a55161a08d7ef419cd3a2b852f4637287f82dc0010c4bfcd399 Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.263811 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.263883 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.263908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.263927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.263965 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264072 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264093 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264105 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264149 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:00.264137113 +0000 UTC m=+22.275704048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264212 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:07:00.264206595 +0000 UTC m=+22.275773530 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264266 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264293 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264344 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264387 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264404 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264355 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:00.264336987 +0000 UTC m=+22.275903922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264458 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:00.264425619 +0000 UTC m=+22.275992554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:06:59 crc kubenswrapper[4730]: E0221 00:06:59.264477 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:00.26446845 +0000 UTC m=+22.276035375 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.322141 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.332595 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.335036 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.342656 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.352110 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.361058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.379346 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.393452 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.403830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.419144 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.434786 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.447883 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.454445 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.462230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.483220 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.492678 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.502058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.518553 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.528280 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.538901 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.550436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.612618 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-plgd8"] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.612976 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.613772 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-v58rm"] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.614395 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.615263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.615673 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.615692 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.615899 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.615990 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.616274 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gsndg"] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.616283 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.616464 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.617176 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.617817 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.618892 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.618927 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.619063 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.619067 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.623927 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:57:18.753911606 +0000 UTC Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.628910 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.649804 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.658822 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669666 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-multus-daemon-config\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669707 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-multus\") 
pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669729 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-conf-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669758 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77k74\" (UniqueName: \"kubernetes.io/projected/7622a560-9120-4202-b95a-246a806fe889-kube-api-access-77k74\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669774 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-netns\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-socket-dir-parent\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669812 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2s7\" (UniqueName: \"kubernetes.io/projected/ed585257-5535-4eb9-9a7c-81081bdae051-kube-api-access-rl2s7\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-hostroot\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669866 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-multus-certs\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669890 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-bin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-k8s-cni-cncf-io\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-etc-kubernetes\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669970 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-os-release\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.669985 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-cni-binary-copy\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670000 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/7622a560-9120-4202-b95a-246a806fe889-rootfs\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670015 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7622a560-9120-4202-b95a-246a806fe889-mcd-auth-proxy-config\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670044 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-cnibin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htjb\" (UniqueName: \"kubernetes.io/projected/900f07ef-9762-49ec-9551-41a6ce12659d-kube-api-access-6htjb\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670078 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-system-cni-dir\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670092 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-cnibin\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7622a560-9120-4202-b95a-246a806fe889-proxy-tls\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670124 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-binary-copy\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670139 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-system-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670156 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670170 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-os-release\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.670185 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-kubelet\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.671770 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.679279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.689611 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.706451 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.718240 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.730778 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.740297 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.751305 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.762929 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.771860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-kubelet\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.771918 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-multus-daemon-config\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.771997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-multus\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772012 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-kubelet\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " 
pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772030 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-conf-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772095 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-conf-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772104 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77k74\" (UniqueName: \"kubernetes.io/projected/7622a560-9120-4202-b95a-246a806fe889-kube-api-access-77k74\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772133 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-netns\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-socket-dir-parent\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772221 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772242 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2s7\" (UniqueName: \"kubernetes.io/projected/ed585257-5535-4eb9-9a7c-81081bdae051-kube-api-access-rl2s7\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-hostroot\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772324 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-multus-certs\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772371 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-bin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772427 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-k8s-cni-cncf-io\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772447 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772473 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-etc-kubernetes\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772507 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-os-release\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772529 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-cni-binary-copy\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772553 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/7622a560-9120-4202-b95a-246a806fe889-rootfs\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772572 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7622a560-9120-4202-b95a-246a806fe889-mcd-auth-proxy-config\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772590 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-cnibin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772610 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6htjb\" (UniqueName: \"kubernetes.io/projected/900f07ef-9762-49ec-9551-41a6ce12659d-kube-api-access-6htjb\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772629 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-system-cni-dir\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772649 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-cnibin\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772670 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7622a560-9120-4202-b95a-246a806fe889-proxy-tls\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772694 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-binary-copy\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772712 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-system-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.772751 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-os-release\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-os-release\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773114 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-multus-daemon-config\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773183 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-multus\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-hostroot\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773275 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-netns\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc 
kubenswrapper[4730]: I0221 00:06:59.773333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-socket-dir-parent\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773390 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-os-release\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773520 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-multus-certs\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-var-lib-cni-bin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.773848 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-host-run-k8s-cni-cncf-io\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774097 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/900f07ef-9762-49ec-9551-41a6ce12659d-cni-binary-copy\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774149 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7622a560-9120-4202-b95a-246a806fe889-rootfs\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774600 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-etc-kubernetes\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774864 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-cnibin\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-system-cni-dir\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.774993 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7622a560-9120-4202-b95a-246a806fe889-mcd-auth-proxy-config\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.775120 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-cnibin\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.775301 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-system-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.775363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/900f07ef-9762-49ec-9551-41a6ce12659d-multus-cni-dir\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.775375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed585257-5535-4eb9-9a7c-81081bdae051-cni-binary-copy\") pod \"multus-additional-cni-plugins-v58rm\" 
(UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.776120 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed585257-5535-4eb9-9a7c-81081bdae051-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.784411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.806349 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff797
3276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.816015 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7622a560-9120-4202-b95a-246a806fe889-proxy-tls\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 
00:06:59.816138 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htjb\" (UniqueName: \"kubernetes.io/projected/900f07ef-9762-49ec-9551-41a6ce12659d-kube-api-access-6htjb\") pod \"multus-gsndg\" (UID: \"900f07ef-9762-49ec-9551-41a6ce12659d\") " pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.816560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2s7\" (UniqueName: \"kubernetes.io/projected/ed585257-5535-4eb9-9a7c-81081bdae051-kube-api-access-rl2s7\") pod \"multus-additional-cni-plugins-v58rm\" (UID: \"ed585257-5535-4eb9-9a7c-81081bdae051\") " pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.818440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.821800 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77k74\" (UniqueName: \"kubernetes.io/projected/7622a560-9120-4202-b95a-246a806fe889-kube-api-access-77k74\") pod \"machine-config-daemon-plgd8\" (UID: \"7622a560-9120-4202-b95a-246a806fe889\") " pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.828306 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.842714 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.855638 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.855701 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.855719 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8197c76e23d1dfdc45f2a5e165361fa15f9fe8fec7f2ed00e5c0f181e3063c91"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.856436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.857134 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.857187 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5916d838773059b25a5bb4836e1283a851712bd6ef9c4f79174ade77241c604a"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.868022 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.869840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.870686 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.877347 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rwggg" event={"ID":"805246ab-eb54-4142-bdc1-cd658cfb3615","Type":"ContainerStarted","Data":"7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.877386 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rwggg" event={"ID":"805246ab-eb54-4142-bdc1-cd658cfb3615","Type":"ContainerStarted","Data":"c5b3e41ccc6e4a55161a08d7ef419cd3a2b852f4637287f82dc0010c4bfcd399"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.879191 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lbr58" event={"ID":"d7c446ed-2321-4ed4-a768-17e71bc811ef","Type":"ContainerStarted","Data":"954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.879225 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lbr58" event={"ID":"d7c446ed-2321-4ed4-a768-17e71bc811ef","Type":"ContainerStarted","Data":"65e14250e275e00b9228709cc76dcf4f4518e99b987b39445c41141a4ab94c6f"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.879894 4730 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.880269 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"53e313cf72e9f2cbf18d65f1bd222e2fdd737b90a6d7c1e5bfaf132ba82592c4"} Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.892384 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.910880 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.924790 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.935327 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v58rm" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.945684 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gsndg" Feb 21 00:06:59 crc kubenswrapper[4730]: W0221 00:06:59.957564 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900f07ef_9762_49ec_9551_41a6ce12659d.slice/crio-68a5b1698bee1576b7dfaa85cd8929cb9aaeff00aa8faf78ca38f0099c613b31 WatchSource:0}: Error finding container 68a5b1698bee1576b7dfaa85cd8929cb9aaeff00aa8faf78ca38f0099c613b31: Status 404 returned error can't find the container with id 68a5b1698bee1576b7dfaa85cd8929cb9aaeff00aa8faf78ca38f0099c613b31 Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.964929 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.978319 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kp9wk"] Feb 21 00:06:59 crc kubenswrapper[4730]: I0221 00:06:59.984197 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.005147 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:06:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.010311 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.028815 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.046296 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.066389 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079806 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079847 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079869 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079887 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079901 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.079918 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084119 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084170 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 
00:07:00.084253 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084319 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084340 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc 
kubenswrapper[4730]: I0221 00:07:00.084366 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084391 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzw4b\" (UniqueName: \"kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084419 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084469 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 
00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.084765 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.104018 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.123912 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.173789 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185094 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185132 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185152 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185169 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185217 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185270 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185230 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185324 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185338 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzw4b\" (UniqueName: \"kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185368 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185383 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185399 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185456 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185479 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185496 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185515 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185531 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185568 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185584 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185798 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd\") pod 
\"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185876 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185913 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185991 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185937 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd\") pod \"ovnkube-node-kp9wk\" (UID: 
\"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.186028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.186005 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.185324 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.186052 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.186073 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc 
kubenswrapper[4730]: I0221 00:07:00.187214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.187511 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.189485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.212344 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.241672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzw4b\" (UniqueName: \"kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b\") pod \"ovnkube-node-kp9wk\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.274027 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.286807 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287056 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:07:02.287024363 +0000 UTC m=+24.298591298 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.287120 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.287164 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.287233 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.287271 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287296 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287378 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:02.287353731 +0000 UTC m=+24.298920666 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287405 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287444 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287441 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287511 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:02.287494394 +0000 UTC m=+24.299061329 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287415 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287548 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287559 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287583 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:02.287576595 +0000 UTC m=+24.299143530 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287456 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.287706 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:02.287606086 +0000 UTC m=+24.299173021 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.311628 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.334460 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.357190 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.392686 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.431075 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.474912 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.507095 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.510464 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.514725 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: W0221 00:07:00.518359 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6272ef5_e657_4f64_a217_305dddfe36cd.slice/crio-7a9a898b68a8e76634ab931e756163c6ffc6a9713cedb2b2db777b0abb8b602e WatchSource:0}: Error finding container 7a9a898b68a8e76634ab931e756163c6ffc6a9713cedb2b2db777b0abb8b602e: Status 404 returned error can't find the container with id 7a9a898b68a8e76634ab931e756163c6ffc6a9713cedb2b2db777b0abb8b602e Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.533760 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.572076 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.616091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.624049 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:35:19.388227995 +0000 UTC Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.657363 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\
"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mount
Path\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.694646 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.694764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.694831 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.694859 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.694890 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.694919 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.700782 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.730110 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.772251 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.820825 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.853763 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.884881 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerStarted","Data":"8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.884933 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerStarted","Data":"68a5b1698bee1576b7dfaa85cd8929cb9aaeff00aa8faf78ca38f0099c613b31"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.886728 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.886758 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" 
event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.886771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"aee647fdef60ef01c1f925442f24155ab898e120e85ea2d0606925260c3e7d18"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.888488 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" exitCode=0 Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.888554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.888598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"7a9a898b68a8e76634ab931e756163c6ffc6a9713cedb2b2db777b0abb8b602e"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.890317 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852" exitCode=0 Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.890378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852"} Feb 21 00:07:00 crc 
kubenswrapper[4730]: I0221 00:07:00.890400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerStarted","Data":"3a1e48e59025a46949a0d7cb13c1e402c176e7a69c2d5f04f7dc8e2663d02dac"} Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.906845 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:00 crc kubenswrapper[4730]: E0221 00:07:00.927340 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:07:00 crc kubenswrapper[4730]: I0221 00:07:00.970347 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"n
ame\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.008852 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8e
e7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276a
e965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.030784 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.072512 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.111125 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.155986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.195000 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.232879 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.271569 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.313983 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.352965 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.392458 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.434793 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.472492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.517376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.554492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.596678 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.624256 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:18:58.127574738 +0000 UTC Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.638303 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.673031 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.712483 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.755549 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.793102 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.831409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.876108 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.898687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.899033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.899045 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" 
event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.901124 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf" exitCode=0 Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.901222 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf"} Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.903469 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d"} Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.922131 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.967107 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:01 crc kubenswrapper[4730]: I0221 00:07:01.992165 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.034628 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.072559 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.113012 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.149738 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.194096 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.233190 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.273055 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.305726 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.305881 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:07:06.305859451 +0000 UTC m=+28.317426386 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.306261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.306286 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.306316 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.306335 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306347 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306387 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:06.306379433 +0000 UTC m=+28.317946368 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306412 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306424 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306434 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306436 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306447 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306454 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306462 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:06.306453484 +0000 UTC m=+28.318020419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306479 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:06.306472195 +0000 UTC m=+28.318039120 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306493 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.306516 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:06.306510795 +0000 UTC m=+28.318077730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.315686 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.351325 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.394224 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.440897 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.472409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.518757 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.559369 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.596099 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.624856 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:43:47.315290275 +0000 UTC Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.692241 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.692273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.692284 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.692414 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.692498 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:02 crc kubenswrapper[4730]: E0221 00:07:02.692560 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.913048 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.913105 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.913127 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" 
event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.916105 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123" exitCode=0 Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.916141 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123"} Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.931862 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.949287 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.960752 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.977092 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:02 crc kubenswrapper[4730]: I0221 00:07:02.993143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.008005 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.021665 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.039287 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.060166 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.072188 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.084384 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.096255 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.115924 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 
00:07:03.162962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.202885 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.625782 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:06:18.800456651 +0000 UTC Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.921822 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a" 
exitCode=0 Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.921863 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a"} Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.938212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identi
ty-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.961554 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:03 crc kubenswrapper[4730]: I0221 00:07:03.978142 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.000266 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.015889 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.029844 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.051073 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.074890 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.096725 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.110192 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.126819 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.139193 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.155189 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.169513 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.189688 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.237560 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.245350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.245459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.245489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.245646 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.252999 4730 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.253483 4730 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.255722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.255783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.255803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.255828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.255848 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.269837 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.274307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.274432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.274514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.274602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.274696 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.296160 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.300800 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.300846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.300859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.300879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.300892 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.312071 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.316698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.316768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.316788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.316816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.316835 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.337171 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.342038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.342089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.342103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.342129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.342144 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.354788 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.355000 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.356897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.356963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.356977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.356997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.357009 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.461612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.461674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.461695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.461718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.461735 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.565268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.565316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.565328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.565346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.565360 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.626423 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:16:07.716746081 +0000 UTC Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.668383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.668431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.668444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.668462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.668475 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.692709 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.692764 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.692865 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.693027 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.693139 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:04 crc kubenswrapper[4730]: E0221 00:07:04.693211 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.771599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.771695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.771714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.772094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.772315 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.875611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.875709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.875727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.875782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.875805 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.931444 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.936403 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548" exitCode=0 Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.936461 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.963727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.980066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.980134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.980157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.980191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.980213 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:04Z","lastTransitionTime":"2026-02-21T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:04 crc kubenswrapper[4730]: I0221 00:07:04.989435 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.004163 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.021267 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.032559 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.045712 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.061272 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.080091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.083196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.083234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.083246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.083275 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.083287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.097898 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.112525 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.144040 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.158529 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.174998 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.186412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.186462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.186474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.186495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.186508 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.196704 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.228369 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.289700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.289755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.289770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.289795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.289810 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.392875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.392926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.392936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.392971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.392984 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.496137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.496169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.496177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.496190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.496198 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.599437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.599480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.599489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.599507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.599517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.626877 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:06:49.932637874 +0000 UTC Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.722227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.722281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.722298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.722321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.722338 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.824819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.824855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.824864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.824876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.824884 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.927094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.927166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.927184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.927209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.927228 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:05Z","lastTransitionTime":"2026-02-21T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.943293 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed585257-5535-4eb9-9a7c-81081bdae051" containerID="1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2" exitCode=0 Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.943338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerDied","Data":"1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2"} Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.963599 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:05 crc kubenswrapper[4730]: I0221 00:07:05.992104 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.015142 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.026314 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.030807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.030848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.030863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.030902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.030912 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.038633 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.050306 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.063403 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.077913 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.092239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.106180 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.118501 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.130679 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.133743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.133795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.133812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.133832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.133849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.146422 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.163801 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.178102 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.236103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc 
kubenswrapper[4730]: I0221 00:07:06.236144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.236154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.236169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.236179 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.303424 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.339125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.339187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.339204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.339227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.339245 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.350773 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.351020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351035 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:07:14.351003535 +0000 UTC m=+36.362570510 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351160 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351397 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:14.351363904 +0000 UTC m=+36.362930879 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351498 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351527 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351546 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.351610 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:14.351593879 +0000 UTC m=+36.363160854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.351244 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.351874 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.351909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.352026 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:06 crc 
kubenswrapper[4730]: E0221 00:07:06.352039 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.352047 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.352076 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:14.352067798 +0000 UTC m=+36.363634733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.352100 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.352203 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-21 00:07:14.35216353 +0000 UTC m=+36.363730505 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.442151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.442205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.442223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.442247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.442265 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.545207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.545241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.545249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.545262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.545274 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.627447 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:07:42.077456815 +0000 UTC Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.647868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.647913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.647923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.647936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.647969 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.693158 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.693154 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.693269 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.693462 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.693584 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:06 crc kubenswrapper[4730]: E0221 00:07:06.693653 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.751115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.751151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.751160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.751174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.751185 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.800174 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.854363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.854432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.854451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.854477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.854498 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.952844 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.953510 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959757 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:06Z","lastTransitionTime":"2026-02-21T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.959823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" event={"ID":"ed585257-5535-4eb9-9a7c-81081bdae051","Type":"ContainerStarted","Data":"4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0"} Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.977553 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.987481 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:06 crc kubenswrapper[4730]: I0221 00:07:06.995555 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.030507 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.041312 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.056076 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.061703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.061736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.061747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc 
kubenswrapper[4730]: I0221 00:07:07.061762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.061772 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.071581 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.091122 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.108043 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.120751 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.133261 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.144360 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.156830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.164078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.164114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.164125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.164140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.164151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.172649 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.188529 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.203512 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.215025 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.228185 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.241479 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.254918 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.266537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.266560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.266604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.266617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.266626 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.269375 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z 
is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.286119 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.299528 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.330214 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.363156 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.369367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.369433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.369458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.369486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.369507 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.399937 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.418611 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.431524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.441074 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.457000 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471089 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.471577 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.574107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.574401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.574513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.574616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.574703 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.628424 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:52:11.455146241 +0000 UTC Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.678194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.678256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.678272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.678296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.678313 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.780614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.780677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.780694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.780722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.780743 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.886831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.887463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.887636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.887780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.887927 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.963569 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.964256 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.989031 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.990513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.990553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.990567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.990584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:07 crc kubenswrapper[4730]: I0221 00:07:07.990597 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:07Z","lastTransitionTime":"2026-02-21T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.017127 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.030091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.042401 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.058920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.082447 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.093264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.093294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.093303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.093317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.093327 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.103112 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0
d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.122330 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.142574 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.160738 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.177287 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.197141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.197214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.197252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.197279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.197299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.199922 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.220635 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.242219 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.268048 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.290488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.300619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc 
kubenswrapper[4730]: I0221 00:07:08.300689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.300708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.300734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.300754 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.325146 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.404158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.404260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.404276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.404333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.404356 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.507309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.507368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.507386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.507411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.507427 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.613738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.613775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.613788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.613807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.613820 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.630277 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:35:03.826518273 +0000 UTC Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.695936 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:08 crc kubenswrapper[4730]: E0221 00:07:08.696280 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.696339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.696516 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:08 crc kubenswrapper[4730]: E0221 00:07:08.696576 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:08 crc kubenswrapper[4730]: E0221 00:07:08.696755 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.712694 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.718638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.718678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.718694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.718713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.718727 4730 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.731853 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.747677 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.764628 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.777336 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.799234 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.822075 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.830643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.830681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.830692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.830711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.830721 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.845448 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.868410 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.889889 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.911552 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.933324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.933352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.933360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.933373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.933382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:08Z","lastTransitionTime":"2026-02-21T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.967171 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.967195 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":
\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:08 crc kubenswrapper[4730]: I0221 00:07:08.986425 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.002207 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.020358 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:09Z is after 2025-08-24T17:21:41Z"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.036640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.036709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.036723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.036745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.036759 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.139653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.139702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.139716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.139741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.139760 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.243026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.243087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.243111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.243136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.243156 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.346304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.346387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.346406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.346438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.346456 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.450107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.450179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.450195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.450223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.450240 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.553500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.553552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.553563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.553583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.553596 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.630972 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:20:12.340047244 +0000 UTC Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.657080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.657138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.657154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.657179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.657194 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.760556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.760640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.760656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.760679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.760697 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.864145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.864210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.864224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.864252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.864268 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.967492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.967535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.967547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.967565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.967603 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:09Z","lastTransitionTime":"2026-02-21T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.974301 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/0.log" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.979822 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949" exitCode=1 Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.979883 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949"} Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.981052 4730 scope.go:117] "RemoveContainer" containerID="23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949" Feb 21 00:07:09 crc kubenswrapper[4730]: I0221 00:07:09.997900 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.028864 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.063579 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:09Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0221 00:07:09.561115 6037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:09.561122 6037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:09.561143 
6037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:09.561154 6037 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:09.561150 6037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:09.561189 6037 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:09.561278 6037 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:09.561296 6037 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:09.561306 6037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:09.561329 6037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 00:07:09.561330 6037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:09.561365 6037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:09.561382 6037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0221 00:07:09.561385 6037 factory.go:656] Stopping watch factory\\\\nI0221 00:07:09.561399 6037 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.071383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.071883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.071903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.071933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.071978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.100823 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92
dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.119186 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.138556 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.152477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.170355 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.175075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.175129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.175148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.175175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.175194 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.189995 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.214488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.237779 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.257640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.276654 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.277743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.277813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.277832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.277858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.277876 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.295530 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.315143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.380448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.380517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.380535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.380560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.380579 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.484441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.484515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.484539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.484572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.484593 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.587801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.587849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.587860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.587877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.587888 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.632005 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:29:03.244520976 +0000 UTC Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.690077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.690114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.690126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.690141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.690150 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.692532 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.692642 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:10 crc kubenswrapper[4730]: E0221 00:07:10.692812 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.692904 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:10 crc kubenswrapper[4730]: E0221 00:07:10.693096 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:10 crc kubenswrapper[4730]: E0221 00:07:10.693269 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.793293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.793355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.793374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.793399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.793417 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.895814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.895858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.895870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.895887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.895900 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.986247 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/0.log" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.989582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a"} Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.989723 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.998194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.998255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.998275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.998301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:10 crc kubenswrapper[4730]: I0221 00:07:10.998320 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:10Z","lastTransitionTime":"2026-02-21T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.008395 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.054023 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.075998 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.095223 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.100553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.100588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.100599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.100617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.100632 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.114238 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.129758 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.149480 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.163589 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.190171 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.204134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.204170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.204186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.204207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.204221 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.210748 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z 
is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.232881 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.250980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.274428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:09Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0221 00:07:09.561115 6037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:09.561122 6037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:09.561143 6037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:09.561154 6037 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0221 00:07:09.561150 6037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:09.561189 6037 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:09.561278 6037 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:09.561296 6037 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:09.561306 6037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:09.561329 6037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 00:07:09.561330 6037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:09.561365 6037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:09.561382 6037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0221 00:07:09.561385 6037 factory.go:656] Stopping watch factory\\\\nI0221 00:07:09.561399 6037 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.299634 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.307001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.307045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.307061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.307084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.307102 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.316865 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.410378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.410441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.410457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.410481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.410499 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.514328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.514415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.514440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.514478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.514509 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.618457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.618521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.618538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.618561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.618579 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.632179 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:32:53.900393275 +0000 UTC Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.721686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.721794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.721814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.721843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.721869 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.825401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.825511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.825539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.825577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.825598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.928523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.928590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.928611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.928641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.928665 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:11Z","lastTransitionTime":"2026-02-21T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.981024 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk"] Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.981871 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.984615 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.984760 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.996635 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/1.log" Feb 21 00:07:11 crc kubenswrapper[4730]: I0221 00:07:11.997548 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/0.log" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.003506 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a" exitCode=1 Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.003568 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.003624 4730 scope.go:117] "RemoveContainer" containerID="23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.004986 4730 scope.go:117] "RemoveContainer" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a" Feb 21 00:07:12 crc kubenswrapper[4730]: E0221 00:07:12.005334 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.027266 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.032838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.032881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.032897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.032921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.032939 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.046508 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.065492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.094151 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.119854 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.120009 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qjjf\" (UniqueName: \"kubernetes.io/projected/7745d1e7-d559-4eb3-97cf-870e01ade14d-kube-api-access-8qjjf\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.120174 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.120346 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: 
\"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.127834 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:09Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0221 00:07:09.561115 6037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:09.561122 6037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:09.561143 6037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:09.561154 6037 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0221 00:07:09.561150 6037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:09.561189 6037 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:09.561278 6037 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:09.561296 6037 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:09.561306 6037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:09.561329 6037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 00:07:09.561330 6037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:09.561365 6037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:09.561382 6037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0221 00:07:09.561385 6037 factory.go:656] Stopping watch factory\\\\nI0221 00:07:09.561399 6037 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.135696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.135763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.135783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.135812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.135833 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.151520 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.174073 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.193653 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.215552 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.221288 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 
00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.221378 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.221445 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.221526 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qjjf\" (UniqueName: \"kubernetes.io/projected/7745d1e7-d559-4eb3-97cf-870e01ade14d-kube-api-access-8qjjf\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.222406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.223131 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.229553 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"
},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.232902 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7745d1e7-d559-4eb3-97cf-870e01ade14d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.241733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.241803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.241828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.241860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.241884 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.279181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.287302 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qjjf\" (UniqueName: \"kubernetes.io/projected/7745d1e7-d559-4eb3-97cf-870e01ade14d-kube-api-access-8qjjf\") pod \"ovnkube-control-plane-749d76644c-zqwvk\" (UID: \"7745d1e7-d559-4eb3-97cf-870e01ade14d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.303642 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.308891 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.329411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.352443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.352515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 
00:07:12.352535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.352561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.352578 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.357801 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.383616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.401776 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.418160 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.441814 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.455575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.455611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.455625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.455642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.455654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.461387 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.481318 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.497976 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.518571 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.536591 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.556418 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.558073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.558111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.558123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.558141 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.558153 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.602470 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.629248 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.632583 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:49:49.42142368 +0000 UTC Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.647921 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.660249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.660296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.660308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.660327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.660339 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.663111 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.678505 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23af75637ef0b12643312b272298194a13bd870433ced4ade3bafebf79e92949\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:09Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0221 00:07:09.561115 6037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:09.561122 6037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:09.561143 6037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:09.561154 6037 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0221 00:07:09.561150 6037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:09.561189 6037 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:09.561278 6037 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:09.561296 6037 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:09.561306 6037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:09.561329 6037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 00:07:09.561330 6037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:09.561365 6037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:09.561382 6037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0221 00:07:09.561385 6037 factory.go:656] Stopping watch factory\\\\nI0221 00:07:09.561399 6037 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.692325 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:12 crc kubenswrapper[4730]: E0221 00:07:12.692486 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.692568 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.692656 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:12 crc kubenswrapper[4730]: E0221 00:07:12.692794 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:12 crc kubenswrapper[4730]: E0221 00:07:12.693043 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.697838 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.709162 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.720301 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.763380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.763407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.763417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.763434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.763445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.866110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.866157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.866174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.866200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.866217 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.969520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.969567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.969590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.969622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:12 crc kubenswrapper[4730]: I0221 00:07:12.969635 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:12Z","lastTransitionTime":"2026-02-21T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.013487 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/1.log" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.022022 4730 scope.go:117] "RemoveContainer" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a" Feb 21 00:07:13 crc kubenswrapper[4730]: E0221 00:07:13.022402 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.027813 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" event={"ID":"7745d1e7-d559-4eb3-97cf-870e01ade14d","Type":"ContainerStarted","Data":"7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.027895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" event={"ID":"7745d1e7-d559-4eb3-97cf-870e01ade14d","Type":"ContainerStarted","Data":"189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.027931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" event={"ID":"7745d1e7-d559-4eb3-97cf-870e01ade14d","Type":"ContainerStarted","Data":"fb90b91870b55250398bad049bcccece0e6ee6716b84eec558c73f01d34bfdb9"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.041011 4730 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.059476 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.073370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.073431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.073450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.073478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.073499 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.083928 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.108526 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.122926 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.141847 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.159727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.176782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.176853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.176871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.176896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.176918 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.185363 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0
d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.209874 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.233914 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.256670 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.274808 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.279569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.279627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc 
kubenswrapper[4730]: I0221 00:07:13.279644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.279746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.279767 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.291079 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.311917 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.332146 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.347962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.366410 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.383005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.383197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.383322 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.383456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.383574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.385658 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.407752 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.433488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.458350 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.476108 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.478215 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-snhft"] Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.479100 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: E0221 00:07:13.479207 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.486517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.486571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.486583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.486604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.486617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.496133 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.519007 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.539266 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.539310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkbv\" (UniqueName: \"kubernetes.io/projected/bcf7c949-7646-4b97-9ffa-bf019455ed07-kube-api-access-dqkbv\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.541860 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.582077 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.588651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.588705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.588727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.588753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.588772 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.594892 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.607348 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.618841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.632779 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:39:24.218126071 +0000 UTC Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.640254 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manag
er-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.640780 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.640881 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkbv\" (UniqueName: \"kubernetes.io/projected/bcf7c949-7646-4b97-9ffa-bf019455ed07-kube-api-access-dqkbv\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: E0221 00:07:13.641003 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:13 crc kubenswrapper[4730]: E0221 00:07:13.641095 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:14.141075483 +0000 UTC m=+36.152642438 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.662129 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.673837 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkbv\" (UniqueName: \"kubernetes.io/projected/bcf7c949-7646-4b97-9ffa-bf019455ed07-kube-api-access-dqkbv\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.679812 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.693233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.693305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.693319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc 
kubenswrapper[4730]: I0221 00:07:13.693345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.693362 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.700993 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.715882 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.731225 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.744117 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc 
kubenswrapper[4730]: I0221 00:07:13.761743 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7e
ea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.793302 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.799810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.799861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.799871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.800135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.800167 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.827593 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.844926 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.859544 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.874378 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.893727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.903876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.903933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.903981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.904012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.904033 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:13Z","lastTransitionTime":"2026-02-21T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.911614 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.931113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.952712 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.972582 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:13 crc kubenswrapper[4730]: I0221 00:07:13.988485 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.007560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.007663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.007682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.008325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.008372 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.009821 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.112807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.112894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.112915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.112975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.112997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.146706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.146984 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.147085 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:15.147055236 +0000 UTC m=+37.158622201 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.216760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.216892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.216975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.217003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.217023 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.321601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.322280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.322299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.322328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.322345 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.427163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.427218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.427235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.427257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.427273 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.450783 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.451005 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.451075 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.451138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.451178 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.451378 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.451405 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.451424 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.451496 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:30.451471971 +0000 UTC m=+52.463038946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452117 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:07:30.452105154 +0000 UTC m=+52.463672089 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452178 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452189 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452209 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-02-21 00:07:30.452200816 +0000 UTC m=+52.463767751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452217 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452233 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452251 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452282 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:30.452266167 +0000 UTC m=+52.463833132 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.452306 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:30.452294968 +0000 UTC m=+52.463861933 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.531468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.531538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.531557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.531584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.531600 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.543638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.543685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.543702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.543720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.543735 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.561673 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.568468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.568541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.568565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.568595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.568614 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.590459 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.596117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.596174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.596191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.596224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.596244 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.613050 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.619359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.619411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.619429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.619466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.619483 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.633541 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:42:10.363881534 +0000 UTC Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.647269 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",
\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.654533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.654600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.654618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.654645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.654665 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.676783 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.677126 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.679801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.679858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.679872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.679889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.679902 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.692421 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.692567 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.692607 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.692656 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.692805 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:14 crc kubenswrapper[4730]: E0221 00:07:14.693132 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.782894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.782930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.782954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.782970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.782982 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.887045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.887137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.887164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.887200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.887220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.990571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.990656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.990672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.990696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:14 crc kubenswrapper[4730]: I0221 00:07:14.990716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:14Z","lastTransitionTime":"2026-02-21T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.094506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.094584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.094606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.094636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.094661 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.160310 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:15 crc kubenswrapper[4730]: E0221 00:07:15.160663 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:15 crc kubenswrapper[4730]: E0221 00:07:15.160862 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:17.16080821 +0000 UTC m=+39.172375355 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.198763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.198842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.198869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.198904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.198933 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.212463 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.237365 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.267120 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.287684 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.302906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.303027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.303055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.303089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:15 crc 
kubenswrapper[4730]: I0221 00:07:15.303115 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.303147 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.324432 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.341113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.358991 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.373416 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.390170 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407457 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.407654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.426578 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.448632 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.471373 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.494829 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.510902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.511003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.511030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.511073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.511101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.516432 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z 
is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.535822 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.554142 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:15 crc 
kubenswrapper[4730]: I0221 00:07:15.614889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.615028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.615057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.615112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.615139 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.634631 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:17:10.112535367 +0000 UTC
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.692554 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:07:15 crc kubenswrapper[4730]: E0221 00:07:15.692777 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.719126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.719199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.719216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.719242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.719258 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.821415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.821477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.821492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.821514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.821530 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.930610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.930686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.930713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.930748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:15 crc kubenswrapper[4730]: I0221 00:07:15.930778 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:15Z","lastTransitionTime":"2026-02-21T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.035206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.035287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.035314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.035351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.035374 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.138508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.138587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.138606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.138637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.138656 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.242516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.242637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.242666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.242711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.242739 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.345891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.345996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.346018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.346050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.346075 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.449675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.449780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.449806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.449850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.449877 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.554325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.554414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.554440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.554485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.554512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.634837 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:10:47.564568608 +0000 UTC
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.657907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.658016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.658043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.658079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.658105 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.692877 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.692877 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.693111 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:07:16 crc kubenswrapper[4730]: E0221 00:07:16.693171 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:07:16 crc kubenswrapper[4730]: E0221 00:07:16.693259 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:07:16 crc kubenswrapper[4730]: E0221 00:07:16.693411 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.761856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.761937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.761982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.762015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.762039 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.866398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.866462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.866487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.866537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.866565 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.958737 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.960513 4730 scope.go:117] "RemoveContainer" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a"
Feb 21 00:07:16 crc kubenswrapper[4730]: E0221 00:07:16.960825 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.970651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.970723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.970741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.970770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:16 crc kubenswrapper[4730]: I0221 00:07:16.970795 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:16Z","lastTransitionTime":"2026-02-21T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.074427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.074496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.074517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.074553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.074580 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.178118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.178202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.178223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.178254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.178276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.183883 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:07:17 crc kubenswrapper[4730]: E0221 00:07:17.184180 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 21 00:07:17 crc kubenswrapper[4730]: E0221 00:07:17.184280 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:21.184243003 +0000 UTC m=+43.195809978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.281734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.281814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.281833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.281865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.281885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.385142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.385221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.385246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.385283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.385308 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.489017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.489089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.489140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.489178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.489205 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.593383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.593471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.593496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.593535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.593557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.635032 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:27:17.304658764 +0000 UTC Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.692936 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:17 crc kubenswrapper[4730]: E0221 00:07:17.693229 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.696589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.696675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.696697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.696721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.696745 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.800013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.800096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.800120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.800153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.800176 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.903334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.903407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.903432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.903465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:17 crc kubenswrapper[4730]: I0221 00:07:17.903489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:17Z","lastTransitionTime":"2026-02-21T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.006828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.006913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.006936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.007007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.007034 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.110131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.110181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.110192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.110212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.110226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.214600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.214708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.214734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.214769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.214796 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.318734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.318821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.318844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.318879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.318904 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.422434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.422553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.422574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.422611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.422631 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.526370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.526445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.526519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.526572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.526594 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.630922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.631045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.631066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.631102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.631124 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.635154 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:12:28.674270585 +0000 UTC Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.693161 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.693224 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:18 crc kubenswrapper[4730]: E0221 00:07:18.693435 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.693458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:18 crc kubenswrapper[4730]: E0221 00:07:18.693691 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:18 crc kubenswrapper[4730]: E0221 00:07:18.693808 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.721765 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.734407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.734477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.734496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.734519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.734537 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.743541 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:
06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.761637 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.777413 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc 
kubenswrapper[4730]: I0221 00:07:18.812409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.830206 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a60
87a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.837169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.837264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.837293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc 
kubenswrapper[4730]: I0221 00:07:18.837335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.837655 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.850806 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.879686 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.916874 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.936427 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.942370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.942439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.942456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.942485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.942502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:18Z","lastTransitionTime":"2026-02-21T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.953998 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.974356 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:18 crc kubenswrapper[4730]: I0221 00:07:18.997465 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.017798 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.033673 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.047490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.047549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.047561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.047582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.047599 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.052562 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.071831 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.151854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.151910 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.151920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.151959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.151974 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.254889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.254976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.254990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.255007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.255021 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.357924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.358031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.358041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.358085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.358101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.460814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.460880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.460898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.460925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.460981 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.564695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.564765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.564785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.564816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.564839 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.635665 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:15:41.720488499 +0000 UTC Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.668896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.668982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.669006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.669041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.669068 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.693217 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:19 crc kubenswrapper[4730]: E0221 00:07:19.693428 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.773279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.773349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.773370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.773398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.773417 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.876386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.876441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.876452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.876474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.876486 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.980574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.980659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.980672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.980695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:19 crc kubenswrapper[4730]: I0221 00:07:19.980714 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:19Z","lastTransitionTime":"2026-02-21T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.083660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.083749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.083778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.083815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.083849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.187848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.187978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.188014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.188115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.188137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.292039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.292129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.292145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.292170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.292200 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.395262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.395317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.395327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.395342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.395352 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.499270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.499323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.499336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.499356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.499367 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.603081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.603167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.603189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.603225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.603250 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.636610 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:27:09.027977219 +0000 UTC Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.692442 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.692685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.692705 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:20 crc kubenswrapper[4730]: E0221 00:07:20.692847 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:20 crc kubenswrapper[4730]: E0221 00:07:20.693122 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:20 crc kubenswrapper[4730]: E0221 00:07:20.693337 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.706156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.706199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.706212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.706229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.706242 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.810178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.810232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.810244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.810267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:20 crc kubenswrapper[4730]: I0221 00:07:20.810281 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:20Z","lastTransitionTime":"2026-02-21T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.106763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.106835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.106855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.106903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.106921 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.212106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.212259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.212496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.212531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.212592 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.233716 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:21 crc kubenswrapper[4730]: E0221 00:07:21.234058 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:21 crc kubenswrapper[4730]: E0221 00:07:21.234167 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:29.23413538 +0000 UTC m=+51.245702345 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.325640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.325715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.325735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.325767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.325792 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.430639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.430705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.430728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.430757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.430776 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.534631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.534734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.534756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.534815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.534835 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.637579 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:15:15.007899912 +0000 UTC Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.638088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.638133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.638150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.638188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.638210 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.692738 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:21 crc kubenswrapper[4730]: E0221 00:07:21.693064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.741055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.741111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.741131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.741160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.741181 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.845172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.845240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.845259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.845290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.845311 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.949409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.949464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.949482 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.949509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:21 crc kubenswrapper[4730]: I0221 00:07:21.949527 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:21Z","lastTransitionTime":"2026-02-21T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.053169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.053246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.053266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.053298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.053317 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.156789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.156872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.156896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.156931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.156997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.261080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.261165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.261184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.261217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.261241 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.364672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.364755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.364777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.364811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.364892 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.468414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.468495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.468539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.468607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.468630 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.572770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.572845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.572863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.572893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.572917 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.637986 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:04:43.096693808 +0000 UTC Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.675818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.675878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.675892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.675917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.675933 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.692343 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.692438 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:22 crc kubenswrapper[4730]: E0221 00:07:22.692524 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.692623 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:22 crc kubenswrapper[4730]: E0221 00:07:22.692927 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:22 crc kubenswrapper[4730]: E0221 00:07:22.693265 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.778811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.778855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.778866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.778881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.778893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.882283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.882392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.882411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.882437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.882469 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.986047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.986138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.986163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.986197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:22 crc kubenswrapper[4730]: I0221 00:07:22.986227 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:22Z","lastTransitionTime":"2026-02-21T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.090513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.090577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.090597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.090630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.090652 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.194599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.194704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.194734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.194766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.194792 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.297457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.297505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.297522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.297542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.297559 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.401639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.401802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.401826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.401861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.401883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.504623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.504692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.504723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.504762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.504791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.607493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.608183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.608235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.608346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.608492 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.638192 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:27:38.017639733 +0000 UTC Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.693078 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:23 crc kubenswrapper[4730]: E0221 00:07:23.693303 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.712743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.712793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.712813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.712842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.712863 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.817012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.817083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.817103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.817135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.817158 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.920926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.921033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.921058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.921091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:23 crc kubenswrapper[4730]: I0221 00:07:23.921116 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:23Z","lastTransitionTime":"2026-02-21T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.043769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.043851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.043869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.043895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.043913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.146280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.146322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.146334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.146352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.146366 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.248977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.249027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.249035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.249051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.249061 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.352255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.352303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.352313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.352334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.352347 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.455058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.455120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.455140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.455175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.455196 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.558716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.558768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.558779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.558798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.558808 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.638514 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:29:55.610115935 +0000 UTC Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.661476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.661537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.661550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.661575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.661590 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.693299 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.693319 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.693462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.693729 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.693855 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.694076 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.765551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.765626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.765646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.765677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.765696 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.783921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.784091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.784115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.784149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.784180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.807201 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.812975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.813052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.813075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.813108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.813132 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.830258 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.835062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.835121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.835141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.835171 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.835196 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.855683 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.863767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.863842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.863864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.863890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.863904 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.883974 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.887937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.888038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.888058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.888089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.888109 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.904701 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:24 crc kubenswrapper[4730]: E0221 00:07:24.904858 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.907222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.907257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.907274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.907300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:24 crc kubenswrapper[4730]: I0221 00:07:24.907319 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:24Z","lastTransitionTime":"2026-02-21T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.010047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.010104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.010123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.010146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.010162 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.113987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.114063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.114082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.114113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.114131 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.217331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.217425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.217452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.217491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.217515 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.321863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.321999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.322021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.322058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.322083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.425986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.426047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.426065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.426092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.426112 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.530724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.530817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.530837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.530870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.530890 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.634477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.634595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.634616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.634652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.634677 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.638779 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:56:36.287866691 +0000 UTC Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.693168 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:25 crc kubenswrapper[4730]: E0221 00:07:25.693369 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.738595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.738676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.738696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.738730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.738756 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.842089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.842157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.842177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.842208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.842229 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.945614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.945706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.945735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.945771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:25 crc kubenswrapper[4730]: I0221 00:07:25.945799 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:25Z","lastTransitionTime":"2026-02-21T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.049995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.050080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.050104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.050140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.050165 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.153878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.153983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.154003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.154034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.154055 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.257596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.257675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.257698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.257730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.257752 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.361267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.361353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.361384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.361418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.361443 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.471274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.471381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.471408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.471446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.471470 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.575758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.575873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.575891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.575916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.575968 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.640005 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:36:00.191192762 +0000 UTC Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.679584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.679664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.679688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.679721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.679744 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.693306 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.693330 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.693405 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:26 crc kubenswrapper[4730]: E0221 00:07:26.693600 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:26 crc kubenswrapper[4730]: E0221 00:07:26.693720 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:26 crc kubenswrapper[4730]: E0221 00:07:26.694039 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.783139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.783213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.783236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.783264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.783285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.886547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.886617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.886635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.886668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.886688 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.990731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.990818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.990839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.990869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:26 crc kubenswrapper[4730]: I0221 00:07:26.990892 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:26Z","lastTransitionTime":"2026-02-21T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.094243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.094312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.094332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.094391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.094412 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.197338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.197387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.197398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.197418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.197430 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.301056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.301124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.301151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.301182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.301204 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.404498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.404554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.404571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.404597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.404614 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.507457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.507521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.507542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.507573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.507594 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.610289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.610338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.610357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.610382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.610401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.640499 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:22:53.153983146 +0000 UTC Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.693158 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:27 crc kubenswrapper[4730]: E0221 00:07:27.693396 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.694296 4730 scope.go:117] "RemoveContainer" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.713305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.713346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.713358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.713375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.713390 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.824417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.824873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.824890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.824916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.824935 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.928602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.928661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.928679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.928705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:27 crc kubenswrapper[4730]: I0221 00:07:27.928723 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:27Z","lastTransitionTime":"2026-02-21T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.032917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.033006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.033025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.033052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.033076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.136853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.136996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.137025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.137058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.137077 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.141054 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/1.log" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.146819 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.147648 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.171188 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.200480 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.223973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.240021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.240078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc 
kubenswrapper[4730]: I0221 00:07:28.240099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.240128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.240147 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.246809 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.266397 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc 
kubenswrapper[4730]: I0221 00:07:28.288492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4
d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 
00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.308849 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.327224 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.341291 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.343110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.343186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.343199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.343244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.343262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.357023 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.383835 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.408158 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.422490 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.435110 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446754 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.446978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.461920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0
d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.476425 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.550349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.550437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.550459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.550494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.550517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.641039 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:18:00.720145136 +0000 UTC Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.653419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.653463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.653478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.653503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.653517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.692841 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.692875 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.693022 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:28 crc kubenswrapper[4730]: E0221 00:07:28.693034 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:28 crc kubenswrapper[4730]: E0221 00:07:28.693162 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:28 crc kubenswrapper[4730]: E0221 00:07:28.693269 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.714781 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.736440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.756024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.756091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.756110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.756571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.756633 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.757694 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.788495 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.807306 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.827475 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654db
fdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.840625 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc 
kubenswrapper[4730]: I0221 00:07:28.860738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.860809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.860830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.860862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.860881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.863561 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.888094 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.919085 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.937631 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.953218 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.963986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.964066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.964090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:28 crc 
kubenswrapper[4730]: I0221 00:07:28.964120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.964143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:28Z","lastTransitionTime":"2026-02-21T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.971544 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:28 crc kubenswrapper[4730]: I0221 00:07:28.988113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.002838 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.019569 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.038529 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.066728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.066794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.066811 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.066843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.066865 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.154287 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/2.log" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.155271 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/1.log" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.159902 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" exitCode=1 Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.159998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.160083 4730 scope.go:117] "RemoveContainer" containerID="49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a" Feb 21 
00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.162751 4730 scope.go:117] "RemoveContainer" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" Feb 21 00:07:29 crc kubenswrapper[4730]: E0221 00:07:29.163034 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.170572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.170624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.170644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.170673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.170693 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.195134 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.211297 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.229249 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.250572 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:29 crc kubenswrapper[4730]: E0221 00:07:29.250753 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:29 crc kubenswrapper[4730]: E0221 00:07:29.250875 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:07:45.250820535 +0000 UTC m=+67.262387510 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.253700 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.273495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.273593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.273618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.273975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.274045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.289771 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49398dd61b09d5770d6ed42f8d6b6a6b1df76e0b4105070cc124a631cd36dc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"message\\\":\\\"Policy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:07:11.147333 6155 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 00:07:11.147381 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 00:07:11.147391 6155 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0221 00:07:11.147408 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 00:07:11.147414 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 00:07:11.147430 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 00:07:11.147462 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:07:11.147496 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 00:07:11.147506 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 00:07:11.147521 6155 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:07:11.147531 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:07:11.147547 6155 factory.go:656] Stopping watch factory\\\\nI0221 00:07:11.147552 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 00:07:11.147570 6155 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664
db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z"
Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.314990 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.330565 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.348338 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.366058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.378149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.378220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.378246 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.378280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.378303 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.383460 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab604049
39c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.403351 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.420918 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.442091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.463313 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.481415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.481463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.481473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.481492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.481508 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.482047 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z 
is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.500384 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.516424 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:29 crc 
kubenswrapper[4730]: I0221 00:07:29.584749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.584801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.584816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.584836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.584848 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.641884 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:02:19.406250709 +0000 UTC Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.687375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.687426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.687438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.687454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.687466 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.692710 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:29 crc kubenswrapper[4730]: E0221 00:07:29.692916 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.790731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.790825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.790845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.790873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.790893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.894577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.894638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.894651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.894679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.894693 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.997512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.997576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.997590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.997614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:29 crc kubenswrapper[4730]: I0221 00:07:29.997631 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:29Z","lastTransitionTime":"2026-02-21T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.100808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.100850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.100860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.100875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.100885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.167004 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/2.log" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.174099 4730 scope.go:117] "RemoveContainer" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.174566 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.190677 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.204929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.205027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.205048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc 
kubenswrapper[4730]: I0221 00:07:30.205079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.205101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.209689 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.228968 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.255185 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.276739 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.297741 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.307458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.307489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.307502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.307518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.307530 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.319909 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.338683 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.356831 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.376246 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc 
kubenswrapper[4730]: I0221 00:07:30.401635 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4
d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 
00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.411535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.411584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.411598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.411619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.411632 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.422123 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:
06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.440832 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.458201 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.466587 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.466734 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.466762 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:08:02.466741719 +0000 UTC m=+84.478308664 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.466794 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.466828 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.466858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.466829 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467046 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:08:02.467024214 +0000 UTC m=+84.478591159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.466928 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467079 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467092 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467188 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:08:02.467166828 +0000 UTC m=+84.478733773 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467201 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467253 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:08:02.46724185 +0000 UTC m=+84.478808795 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.466963 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467289 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467307 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.467366 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:08:02.467350662 +0000 UTC m=+84.478917697 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.484192 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.511567 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 
00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.515689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.515751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.515765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.515788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.515803 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.545713 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.619140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.619231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.619250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.619282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.619305 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.642348 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:15:10.537338629 +0000 UTC Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.693078 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.693240 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.693315 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.693264 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.693528 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:30 crc kubenswrapper[4730]: E0221 00:07:30.693605 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.722395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.722431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.722441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.722457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.722469 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.825408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.825972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.825984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.826005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.826017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.929228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.929263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.929273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.929291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:30 crc kubenswrapper[4730]: I0221 00:07:30.929302 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:30Z","lastTransitionTime":"2026-02-21T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.032436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.032535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.032554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.032583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.032602 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.136139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.136202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.136219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.136243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.136262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.239573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.239656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.239680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.239709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.239731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.341859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.341910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.341925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.341968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.341983 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.445165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.445213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.445227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.445244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.445256 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.548579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.548672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.548691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.548714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.548731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.643505 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:17:45.755197283 +0000 UTC
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.651894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.652017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.652040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.652069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.652090 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.692596 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:07:31 crc kubenswrapper[4730]: E0221 00:07:31.692828 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.755514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.755574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.755595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.755617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.755635 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.859279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.859359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.859370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.859387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.859398 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.962365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.962408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.962423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.962437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:31 crc kubenswrapper[4730]: I0221 00:07:31.962447 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:31Z","lastTransitionTime":"2026-02-21T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.065379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.065429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.065443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.065462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.065476 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.167605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.167657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.167672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.167692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.167708 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.270129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.270191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.270206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.270236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.270249 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.373215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.373255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.373272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.373289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.373299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.475911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.475957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.475967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.475980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.475988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.579926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.580013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.580027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.580043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.580055 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.644032 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:24:08.343047289 +0000 UTC
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.682363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.682419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.682432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.682450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.682465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.692931 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:07:32 crc kubenswrapper[4730]: E0221 00:07:32.693159 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.692927 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:07:32 crc kubenswrapper[4730]: E0221 00:07:32.693241 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.692927 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:07:32 crc kubenswrapper[4730]: E0221 00:07:32.693298 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.744481 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.757918 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.774137 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.785110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.785158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.785178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.785201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:32 
crc kubenswrapper[4730]: I0221 00:07:32.785219 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.793924 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c
20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.812606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.832283 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.863261 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.882215 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.888795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.888870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.888896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.888929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.888993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.909305 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.932114 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.949416 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.970510 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991162 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:32 crc kubenswrapper[4730]: I0221 00:07:32.991904 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:32Z","lastTransitionTime":"2026-02-21T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.013486 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.033044 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.052067 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.070226 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654db
fdd930f25623bdf005cc82cf4562f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.088880 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc 
kubenswrapper[4730]: I0221 00:07:33.095112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.095203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.095232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.095269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.095294 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.107113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.198094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.198159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.198175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.198202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.198221 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.301689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.301766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.301797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.301830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.301854 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.406243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.406328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.406351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.406379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.406399 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.509362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.509419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.509430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.509448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.509460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.612382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.612443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.612462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.612481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.612498 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.644446 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:36:14.624760078 +0000 UTC Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.693289 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:33 crc kubenswrapper[4730]: E0221 00:07:33.693559 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.715441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.715511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.715528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.715552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.715569 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.818065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.818160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.818180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.818205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.818222 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.922000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.922130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.922205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.922237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:33 crc kubenswrapper[4730]: I0221 00:07:33.922315 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:33Z","lastTransitionTime":"2026-02-21T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.025540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.025580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.025592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.025607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.025618 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.128241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.128292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.128303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.128321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.128337 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.231538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.231608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.231631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.231660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.231683 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.335044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.335087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.335105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.335128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.335143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.438174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.438297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.438321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.438344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.438360 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.541258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.541324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.541341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.541366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.541385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644641 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.644577 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:17:12.415201033 +0000 UTC Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.692331 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:34 crc kubenswrapper[4730]: E0221 00:07:34.692473 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.692685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:34 crc kubenswrapper[4730]: E0221 00:07:34.692760 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.692983 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:34 crc kubenswrapper[4730]: E0221 00:07:34.693058 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.747370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.747431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.747449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.747472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.747490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.849830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.849879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.849896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.849918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.849935 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.952736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.952804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.952825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.952856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:34 crc kubenswrapper[4730]: I0221 00:07:34.952881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:34Z","lastTransitionTime":"2026-02-21T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.056411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.056496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.056523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.056555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.056580 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.087211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.087279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.087297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.087321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.087338 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.108982 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.114783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.114843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.114865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.114899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.114922 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.136070 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.141486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.141550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.141567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.141592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.141610 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.162018 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.166760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.166832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.166847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.166866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.166878 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.189119 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.193449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.193500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.193518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.193541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.193557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.212003 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.212282 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.214863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.214931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.214980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.215038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.215058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.319160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.319271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.319290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.319318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.319342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.423370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.423447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.423469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.423497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.423517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.526566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.526646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.526669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.526701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.526721 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.630908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.631006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.631029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.631056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.631076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.644849 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:12:07.211753463 +0000 UTC Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.692939 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:35 crc kubenswrapper[4730]: E0221 00:07:35.693714 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.735469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.735527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.735539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.735559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.735574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.838961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.839032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.839054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.839082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.839099 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.943100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.943183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.943206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.943242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:35 crc kubenswrapper[4730]: I0221 00:07:35.943267 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:35Z","lastTransitionTime":"2026-02-21T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.056546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.056637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.056664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.056695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.056716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.160468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.160521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.160531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.160548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.160557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.263270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.263353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.263382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.263399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.263409 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.367343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.367680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.367818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.368049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.368200 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.471265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.471368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.471386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.471448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.471468 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.574557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.574621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.574638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.574664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.574682 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.645088 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:19:13.380623381 +0000 UTC Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.677397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.677478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.677499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.677532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.677557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.692924 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:36 crc kubenswrapper[4730]: E0221 00:07:36.693246 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.694145 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.694260 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:36 crc kubenswrapper[4730]: E0221 00:07:36.694360 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:36 crc kubenswrapper[4730]: E0221 00:07:36.694451 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.785013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.785090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.785113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.785143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.785165 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.887786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.887825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.887834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.887846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.887856 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.990846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.991172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.991180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.991194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:36 crc kubenswrapper[4730]: I0221 00:07:36.991203 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:36Z","lastTransitionTime":"2026-02-21T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.094057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.094115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.094134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.094157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.094175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.197760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.197814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.197825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.197842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.197854 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.300701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.300745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.300756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.300772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.300783 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.404173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.404239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.404267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.404299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.404323 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.506886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.506954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.506964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.506976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.506984 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.610359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.610728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.610892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.611107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.611259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.645457 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:52:28.388011452 +0000 UTC Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.692544 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:37 crc kubenswrapper[4730]: E0221 00:07:37.692764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.714879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.714916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.714925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.714939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.714964 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.818819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.818883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.818901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.818927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.818974 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.922654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.922698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.922707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.922722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:37 crc kubenswrapper[4730]: I0221 00:07:37.922733 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:37Z","lastTransitionTime":"2026-02-21T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.025405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.025442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.025453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.025469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.025478 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.128666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.128747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.128769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.128805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.128826 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.231197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.231259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.231276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.231302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.231322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.334646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.334706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.334722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.334746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.334763 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.437844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.437913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.437939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.438005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.438045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.540624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.540688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.540707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.540732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.540751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.643220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.643284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.643303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.643329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.643349 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.646650 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:03:50.978538692 +0000 UTC Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.692462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.692698 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.692756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:38 crc kubenswrapper[4730]: E0221 00:07:38.692821 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:38 crc kubenswrapper[4730]: E0221 00:07:38.697667 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:38 crc kubenswrapper[4730]: E0221 00:07:38.698085 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.715112 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.734420 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.749134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc 
kubenswrapper[4730]: I0221 00:07:38.749170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.749180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.749211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.749224 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.753574 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.771239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc 
kubenswrapper[4730]: I0221 00:07:38.796030 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7e
ea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.830586 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.851475 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.852878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.852942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.853063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.853093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.853110 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.887869 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.904636 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.922616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.939090 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.957847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.957933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.957975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.958000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.958019 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:38Z","lastTransitionTime":"2026-02-21T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.959175 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.975373 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:38 crc kubenswrapper[4730]: I0221 00:07:38.990156 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.006540 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:39Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.025929 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:39Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.041201 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:39Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.060466 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:39Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.061045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.061076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.061085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.061116 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.061127 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.164075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.164132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.164145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.164165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.164178 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.266688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.266746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.266765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.266790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.266808 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.370318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.370393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.370411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.370440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.370459 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.473687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.473742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.473751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.473765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.473774 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.578004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.578092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.578114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.578142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.578160 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.647157 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:58:03.71713349 +0000 UTC Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.681758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.681836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.681855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.681881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.681898 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.693111 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:39 crc kubenswrapper[4730]: E0221 00:07:39.693294 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.785735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.785807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.785822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.785844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.785859 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.888774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.888831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.888849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.888873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.888889 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.991763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.991817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.991831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.991852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:39 crc kubenswrapper[4730]: I0221 00:07:39.991864 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:39Z","lastTransitionTime":"2026-02-21T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.095725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.095796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.095814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.095845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.095865 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.199993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.200113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.200142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.200182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.200202 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.304152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.304225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.304241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.304272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.304290 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.408602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.408663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.408682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.408711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.408731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.512138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.512221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.512238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.512266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.512285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.615380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.615464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.615489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.615524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.615545 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.647818 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:23:17.846223261 +0000 UTC Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.693297 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:40 crc kubenswrapper[4730]: E0221 00:07:40.693512 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.693541 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:40 crc kubenswrapper[4730]: E0221 00:07:40.693707 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.693686 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:40 crc kubenswrapper[4730]: E0221 00:07:40.694098 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.719272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.719339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.719380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.719412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.719430 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.822269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.822720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.822738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.822762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.822778 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.926092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.926138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.926151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.926194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:40 crc kubenswrapper[4730]: I0221 00:07:40.926211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:40Z","lastTransitionTime":"2026-02-21T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.029311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.029449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.029468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.029550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.029573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.133079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.133144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.133163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.133193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.133210 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.235884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.235932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.235977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.236002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.236017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.338676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.338720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.338731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.338748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.338759 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.441769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.441813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.441824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.441841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.441851 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.544542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.544592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.544608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.544630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.544647 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.647058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.647109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.647127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.647151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.647168 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.648469 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:47:38.681257372 +0000 UTC Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.692723 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:41 crc kubenswrapper[4730]: E0221 00:07:41.692849 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.749762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.749815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.749831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.749854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.749876 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.853810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.853879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.853897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.853922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.853973 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.957299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.957370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.957389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.957419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:41 crc kubenswrapper[4730]: I0221 00:07:41.957439 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:41Z","lastTransitionTime":"2026-02-21T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.060718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.060771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.060789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.060816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.060835 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.164282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.164329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.164342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.164359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.164371 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.265931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.266008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.266024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.266047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.266064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.369479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.369573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.369596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.369626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.369646 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.472456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.472522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.472539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.472564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.472583 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.576147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.576189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.576198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.576212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.576220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.649486 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:05:31.413639981 +0000 UTC Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.678099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.678171 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.678186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.678202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.678212 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.693177 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.693192 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:42 crc kubenswrapper[4730]: E0221 00:07:42.693280 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.693524 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:42 crc kubenswrapper[4730]: E0221 00:07:42.693528 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:42 crc kubenswrapper[4730]: E0221 00:07:42.693711 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.780597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.780630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.780641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.780654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.780663 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.882205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.882238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.882247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.882259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.882268 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.984450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.984490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.984498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.984516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:42 crc kubenswrapper[4730]: I0221 00:07:42.984527 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:42Z","lastTransitionTime":"2026-02-21T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.087225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.087261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.087272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.087288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.087297 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.189109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.189150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.189159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.189180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.189197 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.292060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.292099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.292108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.292122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.292131 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.394980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.395037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.395050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.395070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.395083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.496920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.496995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.497007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.497025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.497036 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.599602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.599640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.599655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.599674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.599686 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.650026 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:08:13.599029887 +0000 UTC Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.692514 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:43 crc kubenswrapper[4730]: E0221 00:07:43.692751 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.702327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.702390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.702402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.702420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.702432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.805295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.805346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.805358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.805375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.805387 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.908324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.908370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.908381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.908399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:43 crc kubenswrapper[4730]: I0221 00:07:43.908412 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:43Z","lastTransitionTime":"2026-02-21T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.010873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.010906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.010916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.010930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.010957 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.113823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.113875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.113886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.113902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.113912 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.216697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.216763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.216781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.216807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.216826 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.318957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.318997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.319007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.319025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.319037 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.421631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.421711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.421734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.421763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.421785 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.524150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.524178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.524189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.524202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.524212 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.626791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.626832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.626843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.626858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.626869 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.650829 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:03:00.397980186 +0000 UTC Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.694240 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.694289 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.694247 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:44 crc kubenswrapper[4730]: E0221 00:07:44.694352 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:44 crc kubenswrapper[4730]: E0221 00:07:44.694428 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:44 crc kubenswrapper[4730]: E0221 00:07:44.694813 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.695167 4730 scope.go:117] "RemoveContainer" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" Feb 21 00:07:44 crc kubenswrapper[4730]: E0221 00:07:44.695410 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.728894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.728967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.728977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.728992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.729001 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.831246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.831275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.831285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.831299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.831311 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.933310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.933346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.933356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.933371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:44 crc kubenswrapper[4730]: I0221 00:07:44.933383 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:44Z","lastTransitionTime":"2026-02-21T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.035503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.035546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.035557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.035577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.035588 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.138875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.138927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.138956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.138978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.138989 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.223846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.223888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.223898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.223914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.223926 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.235610 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.239374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.239400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.239408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.239421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.239431 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.251189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.251347 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.251197 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.251415 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:08:17.251398471 +0000 UTC m=+99.262965396 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.253694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.253725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.253737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.253756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.253769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.269427 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.273595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.273819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.273843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.273875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.273896 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.286939 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.290610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.290631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.290658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.290671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.290680 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.301465 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:45Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.301681 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.303127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.303159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.303169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.303185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.303197 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.405588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.405635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.405646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.405662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.405680 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.508065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.508098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.508109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.508125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.508136 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.609774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.609797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.609805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.609818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.609827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.651073 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:03:00.785915796 +0000 UTC Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.692664 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:45 crc kubenswrapper[4730]: E0221 00:07:45.692750 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.712144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.712173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.712182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.712193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.712205 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.814933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.815038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.815061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.815090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.815111 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.917681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.917724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.917739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.917763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:45 crc kubenswrapper[4730]: I0221 00:07:45.917779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:45Z","lastTransitionTime":"2026-02-21T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.020914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.021386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.021547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.021732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.021915 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.125004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.125272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.125414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.125540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.125634 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.228378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.228676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.228767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.228851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.228928 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.331142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.331480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.331661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.331873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.332398 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.435187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.435228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.435242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.435261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.435275 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.537428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.537978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.538065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.538159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.538251 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.641222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.641498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.641642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.641787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.641920 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.651580 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:04:55.196890614 +0000 UTC Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.693140 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.693188 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:46 crc kubenswrapper[4730]: E0221 00:07:46.693244 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.693282 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:46 crc kubenswrapper[4730]: E0221 00:07:46.693393 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:46 crc kubenswrapper[4730]: E0221 00:07:46.693477 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.743849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.743914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.743967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.743996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.744017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.846256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.846309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.846326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.846348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.846369 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.948604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.948900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.949129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.949349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:46 crc kubenswrapper[4730]: I0221 00:07:46.949558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:46Z","lastTransitionTime":"2026-02-21T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.052245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.052284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.052292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.052306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.052316 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.155137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.155198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.155214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.155243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.155262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.234790 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/0.log" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.234881 4730 generic.go:334] "Generic (PLEG): container finished" podID="900f07ef-9762-49ec-9551-41a6ce12659d" containerID="8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c" exitCode=1 Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.234935 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerDied","Data":"8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.235701 4730 scope.go:117] "RemoveContainer" containerID="8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.251372 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.257446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.257678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.257851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.258096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.258300 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.266427 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.285741 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.304988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.320178 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc 
kubenswrapper[4730]: I0221 00:07:47.342403 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4
d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 
00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.356324 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.360571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.360601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.360631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.360651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.360661 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.370364 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.383348 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.401703 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.423448 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 
00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.436698 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.462806 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.463047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.463077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: 
I0221 00:07:47.463091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.463121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.463137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.476127 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.488677 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.498727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.511230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.525626 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:47Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.565872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.565913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.565923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.565958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.565971 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.652083 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:38:27.623824068 +0000 UTC Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.668738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.668788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.668800 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.668820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.668832 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.692531 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:47 crc kubenswrapper[4730]: E0221 00:07:47.692725 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.772549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.772635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.772662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.772697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.772720 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.875306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.875349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.875360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.875375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.875402 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.977704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.977736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.977748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.977763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:47 crc kubenswrapper[4730]: I0221 00:07:47.977780 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:47Z","lastTransitionTime":"2026-02-21T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.085022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.085058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.085068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.085083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.085092 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.188352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.188407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.188417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.188437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.188452 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.240374 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/0.log" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.240453 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerStarted","Data":"510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.253933 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.270047 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.290001 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.291399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.291432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.291441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.291459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.291471 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.304241 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.322139 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.332619 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.343473 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.352060 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.364816 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.377017 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.388639 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.393481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.393523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.393535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.393558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.393573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.402194 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.414196 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.425390 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.434567 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc 
kubenswrapper[4730]: I0221 00:07:48.447414 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4
d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 
00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.460362 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.471060 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.496618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.496669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.496680 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.496700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.496715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.599577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.599630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.599650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.599677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.599698 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.652213 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:40:20.666926831 +0000 UTC Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.693313 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.693372 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:48 crc kubenswrapper[4730]: E0221 00:07:48.693434 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:48 crc kubenswrapper[4730]: E0221 00:07:48.693644 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.694540 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:48 crc kubenswrapper[4730]: E0221 00:07:48.694732 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.702097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.702132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.702141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.702183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.702195 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.706168 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.717495 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.738360 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.758515 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 
00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.770588 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.787332 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.797753 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.804630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.804677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.804696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.804717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.804732 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.809207 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.818347 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.831790 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.846022 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.858056 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.871906 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.887376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.900321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.907427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.907462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.907475 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.907492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.907503 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:48Z","lastTransitionTime":"2026-02-21T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.912029 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 
21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.930032 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:48 crc kubenswrapper[4730]: I0221 00:07:48.943239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:48Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.009725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.009780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.009791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.009825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.009836 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.112664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.112741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.112768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.112802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.112827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.215279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.215849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.215869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.215899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.215917 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.319772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.319840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.319859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.319897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.319919 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.423509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.423592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.423616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.423665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.423683 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.527092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.527157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.527174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.527202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.527220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.631305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.631372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.631390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.631417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.631435 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.652766 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:23:52.645286314 +0000 UTC Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.692311 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:49 crc kubenswrapper[4730]: E0221 00:07:49.692501 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.735291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.735355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.735370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.735397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.735415 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.838446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.838495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.838505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.838524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.838536 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.942093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.942147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.942163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.942181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:49 crc kubenswrapper[4730]: I0221 00:07:49.942191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:49Z","lastTransitionTime":"2026-02-21T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.045901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.045959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.045973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.045990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.046001 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.148602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.148634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.148649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.148663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.148673 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.250722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.250763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.250775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.250792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.250801 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.354432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.354479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.354492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.354511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.354522 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.457543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.457585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.457593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.457611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.457621 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.560316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.560382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.560404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.560430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.560450 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.653312 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:09:58.577964492 +0000 UTC Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.669255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.669342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.669368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.669402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.669431 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.692827 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:50 crc kubenswrapper[4730]: E0221 00:07:50.693093 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.693190 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.693259 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:50 crc kubenswrapper[4730]: E0221 00:07:50.693369 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:50 crc kubenswrapper[4730]: E0221 00:07:50.693491 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.771511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.771541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.771549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.771562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.771570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.874255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.874341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.874368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.874409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.874436 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.977557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.977602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.977614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.977633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:50 crc kubenswrapper[4730]: I0221 00:07:50.977645 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:50Z","lastTransitionTime":"2026-02-21T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.080741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.080805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.080828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.080854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.080871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.184040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.184107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.184131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.184160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.184182 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.286020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.286082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.286100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.286122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.286139 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.389069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.389114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.389126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.389144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.389155 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.491742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.491781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.491790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.491804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.491827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.594455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.594496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.594504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.594519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.594529 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.653984 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:15:06.139346856 +0000 UTC Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.692366 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:51 crc kubenswrapper[4730]: E0221 00:07:51.692471 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.697518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.697559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.697570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.697586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.697598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.799532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.799565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.799574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.799587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.799598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.901971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.902033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.902049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.902073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:51 crc kubenswrapper[4730]: I0221 00:07:51.902091 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:51Z","lastTransitionTime":"2026-02-21T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.004632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.004687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.004703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.004725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.004742 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.107141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.107200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.107221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.107247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.107267 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.210269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.210367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.210390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.210417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.210438 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.313201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.313265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.313288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.313316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.313337 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.417113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.417529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.417559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.417589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.417611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.521279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.521371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.521394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.521426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.521447 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.625092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.625172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.625189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.625212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.625231 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.654831 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:22:13.746293831 +0000 UTC
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.693244 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.693259 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:07:52 crc kubenswrapper[4730]: E0221 00:07:52.693455 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.693259 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:07:52 crc kubenswrapper[4730]: E0221 00:07:52.693516 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:07:52 crc kubenswrapper[4730]: E0221 00:07:52.693656 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.727428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.727494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.727517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.727546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.727571 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.831667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.831739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.831761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.831794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.831818 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.935646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.935694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.935705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.935720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:52 crc kubenswrapper[4730]: I0221 00:07:52.935731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:52Z","lastTransitionTime":"2026-02-21T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.038049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.038086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.038098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.038114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.038124 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.140075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.140130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.140147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.140173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.140191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.243207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.243269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.243286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.243314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.243332 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.345775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.345872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.345883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.345898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.345907 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.449011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.449050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.449058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.449072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.449083 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.551894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.552001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.552028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.552060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.552084 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.654442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.654488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.654502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.654522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.654538 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.655331 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:06:32.974678218 +0000 UTC
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.692826 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:07:53 crc kubenswrapper[4730]: E0221 00:07:53.693081 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.756687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.756725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.756736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.756755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.756766 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.859665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.859835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.859865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.859894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.859915 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.962618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.962671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.962682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.962696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:53 crc kubenswrapper[4730]: I0221 00:07:53.962706 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:53Z","lastTransitionTime":"2026-02-21T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.066467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.066536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.066557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.066590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.066612 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.171182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.171231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.171247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.171269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.171287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.276462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.276592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.276611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.276635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.276686 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.380020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.380100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.380123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.380156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.380180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.483373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.483428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.483440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.483459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.483471 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.586635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.586687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.586711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.586744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.586764 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.656503 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:57:29.9736291 +0000 UTC
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.689721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.689773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.689796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.689824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.689845 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.693281 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.693374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:07:54 crc kubenswrapper[4730]: E0221 00:07:54.693419 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:07:54 crc kubenswrapper[4730]: E0221 00:07:54.693541 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.693731 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:07:54 crc kubenswrapper[4730]: E0221 00:07:54.693823 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.793420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.793489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.793515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.793544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.793567 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.896199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.896252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.896263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.896284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.896298 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.998310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.998364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.998376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.998396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:54 crc kubenswrapper[4730]: I0221 00:07:54.998408 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:54Z","lastTransitionTime":"2026-02-21T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.101269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.101338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.101360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.101390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.101415 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.203177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.203205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.203212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.203227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.203235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.306147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.306221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.306242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.306275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.306300 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.409867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.409926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.409969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.409994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.410011 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.513700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.513757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.513779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.513810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.513831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.525584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.525720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.525826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.525915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.526013 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.554975 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:55Z is after 2025-08-24T17:21:41Z"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.560635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.560732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.560750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.560775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.560795 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.581808 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.586633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.586896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.587128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.587357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.587538 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.612874 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.618600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.618837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.619040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.619208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.619336 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.638520 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.644186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.644258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.644282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.644312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.644333 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.657355 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:09:36.782939412 +0000 UTC Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.661732 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",
\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.662060 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.664325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.664373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.664389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.664410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.664429 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.692414 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:55 crc kubenswrapper[4730]: E0221 00:07:55.692542 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.767703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.767748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.767762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.767778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.767791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.871146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.871242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.871266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.871294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.871312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.973967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.973995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.974004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.974018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:55 crc kubenswrapper[4730]: I0221 00:07:55.974028 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:55Z","lastTransitionTime":"2026-02-21T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.076754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.076798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.076810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.076826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.076839 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.179203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.179280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.179305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.179333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.179351 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.282075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.282129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.282146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.282169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.282187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.387333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.387400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.387416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.387444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.387460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.490834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.490886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.490901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.490921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.490935 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.593931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.593995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.594007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.594024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.594035 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.658850 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:24:19.595003561 +0000 UTC Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.692665 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.692691 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:56 crc kubenswrapper[4730]: E0221 00:07:56.692833 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.692979 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:56 crc kubenswrapper[4730]: E0221 00:07:56.693089 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:56 crc kubenswrapper[4730]: E0221 00:07:56.693187 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.697055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.697110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.697222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.697317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.697350 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.801750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.801804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.801822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.801852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.801873 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.905273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.905343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.905366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.905395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:56 crc kubenswrapper[4730]: I0221 00:07:56.905417 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:56Z","lastTransitionTime":"2026-02-21T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.008236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.008317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.008341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.008372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.008413 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.112754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.112808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.112825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.112849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.112868 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.215532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.215587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.215601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.215621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.215637 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.318377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.318449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.318467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.318490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.318509 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.422092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.422147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.422163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.422184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.422202 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.525139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.525202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.525219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.525244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.525278 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.628441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.628513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.628530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.628556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.628573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.659513 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:09:56.35908742 +0000 UTC Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.693264 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:57 crc kubenswrapper[4730]: E0221 00:07:57.693519 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.731473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.731527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.731545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.731569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.731711 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.835896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.835981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.836000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.836025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.836044 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.940060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.940132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.940149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.940180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:57 crc kubenswrapper[4730]: I0221 00:07:57.940198 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:57Z","lastTransitionTime":"2026-02-21T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.044462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.044544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.044572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.044623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.044648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.148166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.148225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.148242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.148268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.148288 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.251999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.252051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.252068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.252092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.252110 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.354233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.354325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.354346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.354371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.354389 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.457549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.457639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.457656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.457680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.457729 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.561206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.561271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.561289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.561313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.561335 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.660495 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:47:55.475523838 +0000 UTC Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.663616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.663668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.663685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.663709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.663727 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.693147 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.693184 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:07:58 crc kubenswrapper[4730]: E0221 00:07:58.693293 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.693541 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:07:58 crc kubenswrapper[4730]: E0221 00:07:58.693700 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:07:58 crc kubenswrapper[4730]: E0221 00:07:58.693990 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.731654 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.750446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.767427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.767476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.767496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.767521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.767539 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.769991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.794914 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc
5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.831809 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.850581 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.869564 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.870664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.870696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.870705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.870721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.870732 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.889704 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.909056 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.923854 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.944303 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.965144 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.976378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.976436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.976675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.976744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.976764 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:58Z","lastTransitionTime":"2026-02-21T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:58 crc kubenswrapper[4730]: I0221 00:07:58.993086 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.013686 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.036049 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.052193 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.068832 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc 
kubenswrapper[4730]: I0221 00:07:59.080815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.080865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.080882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.080909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.080926 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.089829 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.183558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.183899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.184071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.184221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.184364 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.286670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.286737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.286755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.286780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.286799 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.389517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.389558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.389570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.389588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.389600 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.495460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.495500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.495515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.495534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.495549 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.601821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.601901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.601924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.601988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.602013 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.661253 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:24:07.345872864 +0000 UTC Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.693044 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:07:59 crc kubenswrapper[4730]: E0221 00:07:59.693317 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.694681 4730 scope.go:117] "RemoveContainer" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.704788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.704927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.705027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.705111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.705182 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.807857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.807900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.807912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.807928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.807940 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.910082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.910106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.910114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.910126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:07:59 crc kubenswrapper[4730]: I0221 00:07:59.910136 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:07:59Z","lastTransitionTime":"2026-02-21T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.012710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.012740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.012749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.012763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.012774 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.116429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.116468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.116478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.116492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.116502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.220012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.220062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.220074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.220092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.220104 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.288056 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/2.log" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.291007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.291722 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.325254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.325303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.325325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.325345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.325357 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.331537 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.347903 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.362976 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.385260 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.414290 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.429068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.429155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.429178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.429208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.429230 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.435824 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.457442 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.474049 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.489775 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.508616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.528613 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.532794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.532832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.532842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.532861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.532871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.544051 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.566822 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.586208 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.604171 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.618536 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.636012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.636108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.636133 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.636169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.636191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.638847 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 
21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.657882 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.661884 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:30:14.28704252 +0000 UTC Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.692444 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.692507 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.692465 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:00 crc kubenswrapper[4730]: E0221 00:08:00.692600 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:00 crc kubenswrapper[4730]: E0221 00:08:00.692707 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:00 crc kubenswrapper[4730]: E0221 00:08:00.692859 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.739297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.739353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.739364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.739394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.739404 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.842542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.842618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.842642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.842676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.842699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.946084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.946195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.946220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.946250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:00 crc kubenswrapper[4730]: I0221 00:08:00.946273 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:00Z","lastTransitionTime":"2026-02-21T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.050027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.050140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.050159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.050184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.050201 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.153975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.154102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.154120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.154150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.154167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.256770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.256818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.256875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.256902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.257001 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.297544 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/3.log" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.298748 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/2.log" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.303663 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" exitCode=1 Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.303724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.303770 4730 scope.go:117] "RemoveContainer" containerID="3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.304911 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:08:01 crc kubenswrapper[4730]: E0221 00:08:01.305182 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.332594 4730 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.355926 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.361036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.361100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.361121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.361150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.361167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.375984 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.397565 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.412555 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.434076 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.451815 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.463863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.463905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.463917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.463932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.463972 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.470650 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.489469 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.506247 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.518486 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.531843 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc 
kubenswrapper[4730]: I0221 00:08:01.545058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.566873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.566937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.566988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.567012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.567030 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.577697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92
dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.593406 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.607440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.625672 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.648994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3291b7acb680ee9239dd02a98b01bbc726817fa76864f6241e0c6466949e02f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:28Z\\\",\\\"message\\\":\\\"Status{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0221 00:07:28.799603 6376 factory.go:656] Stopping watch factory\\\\nI0221 00:07:28.801626 6376 ovnkube.go:599] Stopped ovnkube\\\\nI0221 00:07:28.797483 6376 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0221 00:07:28.801755 6376 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 00:07:28.801759 6376 services_controller.go:452] Built service openshift-console/downloads per-node LB for network=default: []services.LB{}\\\\nF0221 00:07:28.801987 6376 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:08:00Z\\\",\\\"message\\\":\\\"o:656] Stopping watch factory\\\\nI0221 00:08:00.630565 6829 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630477 6829 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:08:00.630574 6829 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 
00:08:00.630865 6829 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630918 6829 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631050 6829 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631182 6829 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631540 6829 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631563 6829 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631675 6829 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\
\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\
\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:01 crc kubenswrapper[4730]: 
I0221 00:08:01.661998 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:31:42.003490696 +0000 UTC Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.670021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.670052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.670060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.670072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.670081 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.692835 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:01 crc kubenswrapper[4730]: E0221 00:08:01.693096 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.772312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.772341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.772351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.772365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.772375 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.874919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.874986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.874996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.875012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.875025 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.977915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.978010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.978029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.978055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:01 crc kubenswrapper[4730]: I0221 00:08:01.978072 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:01Z","lastTransitionTime":"2026-02-21T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.080650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.080698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.080719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.080750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.080774 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.183484 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.183538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.183555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.183578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.183596 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.286899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.287029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.287052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.287079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.287098 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.309827 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/3.log" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.315316 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.315605 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.332717 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.351912 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.376208 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.390214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.390297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.390321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.390353 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.390377 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.400248 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 
00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.422437 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.441697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.459729 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc 
kubenswrapper[4730]: I0221 00:08:02.478839 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.493157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.493188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.493199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.493217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.493230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.511784 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92
dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.527656 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.545428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.553142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.553307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553368 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:06.553297885 +0000 UTC m=+148.564864860 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.553431 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553522 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553608 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:09:06.55358129 +0000 UTC m=+148.565148255 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.553642 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.553686 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553759 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553797 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553818 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553834 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553860 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553766 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553880 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 00:09:06.553863196 +0000 UTC m=+148.565430171 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553885 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553923 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 00:09:06.553908577 +0000 UTC m=+148.565475542 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.553983 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 00:09:06.553936157 +0000 UTC m=+148.565503122 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.603467 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.604118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.604177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.604196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.604223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.604242 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.640697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:08:00Z\\\",\\\"message\\\":\\\"o:656] Stopping watch factory\\\\nI0221 00:08:00.630565 6829 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630477 6829 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:08:00.630574 6829 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:08:00.630865 6829 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630918 6829 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631050 6829 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631182 6829 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631540 6829 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631563 6829 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631675 6829 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.662885 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:05:24.57577166 +0000 UTC Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.666868 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.693392 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.693474 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.693591 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.693705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.693974 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:02 crc kubenswrapper[4730]: E0221 00:08:02.694058 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.707252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.707318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.707337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.707361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.707382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.710138 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.737103 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.753852 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.765574 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.809968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.810035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.810048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.810071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.810084 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.913426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.913471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.913488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.913514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:02 crc kubenswrapper[4730]: I0221 00:08:02.913531 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:02Z","lastTransitionTime":"2026-02-21T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.016103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.016177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.016203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.016235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.016256 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.120223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.120282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.120300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.120323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.120342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.224113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.224167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.224184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.224209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.224227 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.327382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.327491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.327517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.327557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.327586 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.431518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.431596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.431614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.431644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.431666 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.533785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.533824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.533833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.533853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.533871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.637242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.637307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.637326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.637348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.637361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.663473 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:34:17.978604828 +0000 UTC Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.693064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:03 crc kubenswrapper[4730]: E0221 00:08:03.693202 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.740478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.740524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.740533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.740552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.740565 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.843166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.843245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.843264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.843295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.843316 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.945653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.945733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.945754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.945797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:03 crc kubenswrapper[4730]: I0221 00:08:03.945821 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:03Z","lastTransitionTime":"2026-02-21T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.049233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.049309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.049331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.049356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.049375 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.152668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.152714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.152735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.152759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.152779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.256011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.256068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.256080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.256101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.256114 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.359160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.359245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.359271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.359307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.359332 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.462279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.462322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.462337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.462353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.462365 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.565095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.565157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.565174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.565200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.565218 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.663956 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:31:06.094187094 +0000 UTC Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.668190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.668248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.668264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.668291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.668333 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.694336 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.694361 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.694382 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:04 crc kubenswrapper[4730]: E0221 00:08:04.694565 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:04 crc kubenswrapper[4730]: E0221 00:08:04.694652 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:04 crc kubenswrapper[4730]: E0221 00:08:04.694713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.771162 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.771250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.771272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.771300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.771322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.874790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.874838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.874847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.874864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.874874 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.977430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.977466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.977476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.977492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:04 crc kubenswrapper[4730]: I0221 00:08:04.977504 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:04Z","lastTransitionTime":"2026-02-21T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.081348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.081421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.081441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.081466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.081485 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.184538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.184617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.184639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.184669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.184693 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.286649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.286715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.286734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.286758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.286777 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.390137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.390236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.390259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.390288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.390311 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.492896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.493009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.493028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.493058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.493075 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.595521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.596128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.596568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.596961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.597127 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.665138 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:17:44.998995025 +0000 UTC Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.693098 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:05 crc kubenswrapper[4730]: E0221 00:08:05.693297 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.700561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.700640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.700665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.700695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.700722 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.802801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.802860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.802880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.802904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.802922 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.905818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.905879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.905896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.905919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:05 crc kubenswrapper[4730]: I0221 00:08:05.905935 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:05Z","lastTransitionTime":"2026-02-21T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.009147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.009214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.009230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.009254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.009274 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.055500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.055565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.055591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.055623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.055676 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.078659 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.084547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.084621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.084646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.084677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.084700 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.104861 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.110860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.111139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.111361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.111580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.111877 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.131705 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.138343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.138395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.138413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.138480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.138499 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.166047 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.171856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.171979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.172003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.172037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.172060 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.194175 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.194405 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.196828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.196877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.196895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.196922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.196972 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.300755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.300829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.300846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.300873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.300891 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.404874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.404927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.405117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.405183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.405204 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.508849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.508931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.508992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.509023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.509041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.611994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.612039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.612052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.612073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.612087 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.667124 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:12:52.575219433 +0000 UTC Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.692685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.692806 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.693021 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.693242 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.693391 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:06 crc kubenswrapper[4730]: E0221 00:08:06.693552 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.715513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.715585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.715614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.715646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.715674 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.818858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.818980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.819016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.819049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.819070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.922154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.922219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.922237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.922261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:06 crc kubenswrapper[4730]: I0221 00:08:06.922279 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:06Z","lastTransitionTime":"2026-02-21T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.025389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.025439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.025450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.025469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.025483 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.129066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.129127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.129145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.129170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.129189 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.232924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.233021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.233040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.233063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.233081 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.335106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.335173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.335196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.335225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.335247 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.439120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.439184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.439201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.439231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.439251 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.541370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.541594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.541616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.541641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.541660 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.645685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.645767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.645781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.645805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.645819 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.683441 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:16:28.816828098 +0000 UTC Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.692869 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:07 crc kubenswrapper[4730]: E0221 00:08:07.693132 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.748220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.748297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.748320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.748350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.748373 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.850747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.850844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.850862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.850887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.850904 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.954113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.954172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.954185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.954209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:07 crc kubenswrapper[4730]: I0221 00:08:07.954224 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:07Z","lastTransitionTime":"2026-02-21T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.057327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.057403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.057421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.057447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.057465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.160998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.161056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.161073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.161097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.161116 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.269042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.269134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.269155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.269181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.269207 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.372355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.372416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.372435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.372461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.372480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.476155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.476757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.476912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.477103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.477277 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.580578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.580651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.580676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.580704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.580722 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.683840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.684792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.683798 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:03:32.696307694 +0000 UTC Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.684984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.685071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.685097 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.692278 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.692503 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:08 crc kubenswrapper[4730]: E0221 00:08:08.692485 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.692548 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:08 crc kubenswrapper[4730]: E0221 00:08:08.692773 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:08 crc kubenswrapper[4730]: E0221 00:08:08.693546 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.713494 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.749324 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.767446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.783412 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914
aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.787587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.787676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.787697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc 
kubenswrapper[4730]: I0221 00:08:08.787720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.787736 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.805981 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c
1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.837868 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:08:00Z\\\",\\\"message\\\":\\\"o:656] Stopping watch factory\\\\nI0221 00:08:00.630565 6829 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630477 6829 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:08:00.630574 6829 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:08:00.630865 6829 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630918 6829 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631050 6829 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631182 6829 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631540 6829 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631563 6829 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631675 6829 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.855831 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.875534 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.890301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.890331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.890339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.890353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.890362 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.892438 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.909250 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.919421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.936496 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.951782 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.967636 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d
229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.985491 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.993794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.993846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.993863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.993886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:08 crc kubenswrapper[4730]: I0221 00:08:08.993905 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:08Z","lastTransitionTime":"2026-02-21T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.005920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.024157 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.041232 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:09 crc 
kubenswrapper[4730]: I0221 00:08:09.097120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.097193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.097217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.097247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.097268 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.200122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.200197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.200217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.200240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.200257 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.303202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.303253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.303272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.303312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.303332 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.406460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.406516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.406529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.406548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.406561 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.509910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.510070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.510098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.510132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.510154 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.612578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.612641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.612665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.612694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.612716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.685764 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:55:00.391295831 +0000 UTC Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.693267 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:09 crc kubenswrapper[4730]: E0221 00:08:09.693603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.707513 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.716318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.716415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.716435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.716517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.716579 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.820290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.820356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.820378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.820406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.820428 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.923085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.923121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.923132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.923148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:09 crc kubenswrapper[4730]: I0221 00:08:09.923162 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:09Z","lastTransitionTime":"2026-02-21T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.026560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.026628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.026651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.026682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.026709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.129783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.129871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.129897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.129929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.130010 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.232761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.232811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.232829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.232852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.232871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.335373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.335446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.335464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.335489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.335506 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.438417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.438473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.438490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.438513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.438530 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.541790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.541858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.541875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.541902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.541919 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.645034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.645098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.645116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.645143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.645165 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.687022 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:38:31.19231306 +0000 UTC Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.692333 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.692382 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:10 crc kubenswrapper[4730]: E0221 00:08:10.692508 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.692596 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:10 crc kubenswrapper[4730]: E0221 00:08:10.692756 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:10 crc kubenswrapper[4730]: E0221 00:08:10.692867 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.747217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.747271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.747284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.747304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.747317 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.850194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.850250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.850315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.850339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.850400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.953620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.953708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.953725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.953762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:10 crc kubenswrapper[4730]: I0221 00:08:10.953785 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:10Z","lastTransitionTime":"2026-02-21T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.056799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.056871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.056890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.056916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.056933 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.160752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.160822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.160843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.160871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.160893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.264032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.264089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.264105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.264124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.264136 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.366939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.366996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.367005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.367018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.367028 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.470280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.470317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.470329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.470347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.470360 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.573099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.573185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.573203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.573227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.573243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.677250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.677282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.677290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.677303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.677312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.687836 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:53:01.680342554 +0000 UTC Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.693073 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:11 crc kubenswrapper[4730]: E0221 00:08:11.693191 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.779934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.780030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.780048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.780075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.780094 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.882341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.882442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.882548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.882585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.882611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.986149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.986211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.986229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.986253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:11 crc kubenswrapper[4730]: I0221 00:08:11.986271 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:11Z","lastTransitionTime":"2026-02-21T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.088724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.088791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.088802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.088820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.088831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.191686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.191728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.191743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.191762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.191775 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.300426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.300490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.300508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.300532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.300556 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.435532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.435599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.435623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.435649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.435669 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.539327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.539404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.539425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.539455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.539476 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.642162 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.642200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.642212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.642229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.642239 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.688442 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:23:56.271575302 +0000 UTC Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.692833 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.692898 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:12 crc kubenswrapper[4730]: E0221 00:08:12.693087 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.693207 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:12 crc kubenswrapper[4730]: E0221 00:08:12.693376 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:12 crc kubenswrapper[4730]: E0221 00:08:12.694118 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.694646 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:08:12 crc kubenswrapper[4730]: E0221 00:08:12.694912 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.744416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.744522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.744550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.744585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.744609 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.847974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.848642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.848661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.848687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.848708 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.951649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.951704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.951722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.951744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:12 crc kubenswrapper[4730]: I0221 00:08:12.951762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:12Z","lastTransitionTime":"2026-02-21T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.055496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.055548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.055566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.055588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.055605 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.158752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.158809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.158828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.158849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.158866 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.262085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.262153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.262178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.262206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.262226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.365559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.365625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.365636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.365655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.365668 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.468707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.468764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.468781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.468807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.468823 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.572073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.572133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.572146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.572170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.572185 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.674730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.674776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.674787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.674804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.674814 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.689130 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:02:05.301316674 +0000 UTC Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.692402 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:13 crc kubenswrapper[4730]: E0221 00:08:13.692528 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.777838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.777929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.777997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.778021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.778038 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.881314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.881363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.881374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.881394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.881408 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.984433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.984498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.984515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.984541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:13 crc kubenswrapper[4730]: I0221 00:08:13.984562 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:13Z","lastTransitionTime":"2026-02-21T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.087838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.087909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.087932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.087994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.088017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.191649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.191818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.191842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.191866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.191914 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.295310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.295362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.295377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.295398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.295414 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.398393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.398501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.398565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.398596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.398659 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.505411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.505466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.505482 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.505506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.505522 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.609176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.609230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.609246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.609268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.609285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.690111 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:57:51.740504392 +0000 UTC Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.693173 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.693215 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:14 crc kubenswrapper[4730]: E0221 00:08:14.693391 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.693505 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:14 crc kubenswrapper[4730]: E0221 00:08:14.693742 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:14 crc kubenswrapper[4730]: E0221 00:08:14.693810 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.712000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.712063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.712082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.712109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.712129 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.815332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.815365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.815373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.815384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.815393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.917706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.917745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.917756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.917772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:14 crc kubenswrapper[4730]: I0221 00:08:14.917783 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:14Z","lastTransitionTime":"2026-02-21T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.020261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.020324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.020342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.020366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.020385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.124396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.124510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.124534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.124563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.124584 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.227421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.227485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.227502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.227527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.227544 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.332454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.332513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.332530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.332553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.332570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.435726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.435792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.435818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.435848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.435870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.538978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.539044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.539066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.539123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.539145 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.642563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.642637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.642654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.642680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.642698 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.691301 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:39:44.772539811 +0000 UTC Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.692555 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:15 crc kubenswrapper[4730]: E0221 00:08:15.692743 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.746305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.746369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.746389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.746413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.746432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.849662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.849722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.849739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.849765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.849781 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.952253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.952351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.952377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.952408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:15 crc kubenswrapper[4730]: I0221 00:08:15.952431 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:15Z","lastTransitionTime":"2026-02-21T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.054889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.054932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.054957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.054973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.054983 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.157669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.157714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.157732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.157753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.157769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.217800 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.217837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.217850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.217866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.217878 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.238007 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.242554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.242622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.242642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.242669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.242689 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.258315 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.261731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.261774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.261784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.261798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.261808 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.272535 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.276177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.276217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.276228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.276243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.276254 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.286177 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:16Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.289588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.289643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.289657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.289681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.289696 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.305121 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:16Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.305258 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.306982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.307009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.307017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.307048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.307058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.409900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.409982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.410002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.410020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.410055 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.513131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.513214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.513238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.513269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.513292 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.616145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.616212 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.616227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.616247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.616263 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.691745 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:37:50.208312328 +0000 UTC
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.693097 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.693277 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.693364 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.693567 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.693763 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:16 crc kubenswrapper[4730]: E0221 00:08:16.693918 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.719371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.719443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.719463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.719485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.719502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.822606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.822846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.822871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.822896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.822922 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.925731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.925797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.925819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.925848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:16 crc kubenswrapper[4730]: I0221 00:08:16.925870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:16Z","lastTransitionTime":"2026-02-21T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.028269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.028327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.028348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.028387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.028407 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.131264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.131298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.131309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.131324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.131334 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.234471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.234532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.234542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.234561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.234573 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.327301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:17 crc kubenswrapper[4730]: E0221 00:08:17.327410 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:08:17 crc kubenswrapper[4730]: E0221 00:08:17.327857 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs podName:bcf7c949-7646-4b97-9ffa-bf019455ed07 nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.32784166 +0000 UTC m=+163.339408595 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs") pod "network-metrics-daemon-snhft" (UID: "bcf7c949-7646-4b97-9ffa-bf019455ed07") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.336379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.336414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.336425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.336440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.336453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.438578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.438617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.438626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.438639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.438648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.542004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.542067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.542088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.542113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.542162 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.645048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.645098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.645115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.645167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.645184 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.692163 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:09:37.234102169 +0000 UTC Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.692231 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:17 crc kubenswrapper[4730]: E0221 00:08:17.692383 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.748608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.748665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.748682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.748703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.748719 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.851851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.851932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.851986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.852020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.852040 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.955238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.955329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.955352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.955380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:17 crc kubenswrapper[4730]: I0221 00:08:17.955400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:17Z","lastTransitionTime":"2026-02-21T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.058973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.059036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.059055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.059079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.059096 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.162521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.162619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.162655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.162690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.162713 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.265611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.265673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.265692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.265718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.265735 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.369476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.369527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.369545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.369568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.369584 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.472583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.472617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.472626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.472639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.472648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.575115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.575151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.575159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.575172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.575180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.677999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.678035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.678047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.678063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.678075 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.692888 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.692920 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:12:28.210384362 +0000 UTC Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.693027 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:18 crc kubenswrapper[4730]: E0221 00:08:18.693079 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.693239 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:18 crc kubenswrapper[4730]: E0221 00:08:18.693339 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:18 crc kubenswrapper[4730]: E0221 00:08:18.693503 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.709736 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.726514 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe319
3d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.749128 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09
db548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.778962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:08:00Z\\\",\\\"message\\\":\\\"o:656] Stopping watch factory\\\\nI0221 00:08:00.630565 6829 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630477 6829 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:08:00.630574 6829 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:08:00.630865 6829 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630918 6829 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631050 6829 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631182 6829 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631540 6829 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631563 6829 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631675 6829 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.780150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.780177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.780186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.780199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.780210 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.796557 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.830293 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.852654 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.871087 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.883513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.883567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.883584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.883604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.883618 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.885015 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.900935 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.914574 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.930109 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.945792 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.958428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a89256f-4201-4857-9697-c2a6e6bf6ed4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a4186f767d15fe95a75c03b53e279537c7e7d534cffec8f37ddab3962dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.969667 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.983728 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.986180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.986231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.986240 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.986256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.986265 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:18Z","lastTransitionTime":"2026-02-21T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:18 crc kubenswrapper[4730]: I0221 00:08:18.996596 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:18Z is after 2025-08-24T17:21:41Z" Feb 
21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.015125 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.031980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.088993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.089026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.089037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.089053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.089066 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.192095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.192457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.192606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.192762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.192902 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.296097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.296172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.296192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.296216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.296233 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.397976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.398035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.398054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.398077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.398093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.501249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.501284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.501295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.501336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.501349 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.603819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.603882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.603899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.603926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.603988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.692449 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:19 crc kubenswrapper[4730]: E0221 00:08:19.692640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.693516 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:39:17.674891031 +0000 UTC
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.706910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.707179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.707323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.707474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.707611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.809777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.809815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.809826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.809840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.809854 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.912104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.912142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.912151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.912165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:19 crc kubenswrapper[4730]: I0221 00:08:19.912175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:19Z","lastTransitionTime":"2026-02-21T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.014764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.015121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.015287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.015456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.015588 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.119019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.119351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.119516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.119679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.119821 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.226615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.226820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.227023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.227179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.227316 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.330151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.330177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.330185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.330197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.330206 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.433085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.433339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.433476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.433620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.433761 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.537131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.537492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.537650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.537799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.537936 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.640814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.641196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.641346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.641497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.641617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.700336 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.700336 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.700114 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:27:59.09380656 +0000 UTC
Feb 21 00:08:20 crc kubenswrapper[4730]: E0221 00:08:20.700720 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.700847 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:20 crc kubenswrapper[4730]: E0221 00:08:20.701043 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:20 crc kubenswrapper[4730]: E0221 00:08:20.701276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.744071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.744139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.744163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.744191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.744211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.847456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.847540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.847561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.847591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.847616 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.950802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.950881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.950901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.950927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:20 crc kubenswrapper[4730]: I0221 00:08:20.950994 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:20Z","lastTransitionTime":"2026-02-21T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.053290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.053367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.053390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.053419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.053445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.156829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.156881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.156897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.156921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.156937 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.260269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.260329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.260380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.260403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.260419 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.363382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.363442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.363463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.363493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.363514 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.466889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.466930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.466938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.466970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.466981 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.570064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.570128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.570145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.570168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.570187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.673527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.673558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.673566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.673579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.673587 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.693100 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:08:21 crc kubenswrapper[4730]: E0221 00:08:21.693266 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.701291 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:54:57.290097835 +0000 UTC
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.776690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.776791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.776812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.776833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.776893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.880014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.880082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.880096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.880115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.880127 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.983148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.983201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.983217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.983241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:21 crc kubenswrapper[4730]: I0221 00:08:21.983258 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:21Z","lastTransitionTime":"2026-02-21T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.086201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.086262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.086278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.086301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.086318 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.189101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.189155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.189177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.189211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.189234 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.291473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.291511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.291520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.291535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.291546 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.394888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.394996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.395017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.395043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.395061 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.497911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.498003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.498024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.498047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.498063 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.600830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.600871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.600882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.600897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.600908 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.692301 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.692347 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.692571 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:22 crc kubenswrapper[4730]: E0221 00:08:22.692770 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:22 crc kubenswrapper[4730]: E0221 00:08:22.692972 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:22 crc kubenswrapper[4730]: E0221 00:08:22.693100 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.701414 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:59:00.639609306 +0000 UTC Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.703012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.703076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.703099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.703128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.703150 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.806037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.806083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.806095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.806112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.806123 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.908976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.909034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.909055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.909087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:22 crc kubenswrapper[4730]: I0221 00:08:22.909110 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:22Z","lastTransitionTime":"2026-02-21T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.012527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.012574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.012589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.012609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.012623 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.116011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.116062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.116078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.116099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.116116 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.220093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.220169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.220187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.220216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.220235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.322812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.322853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.322865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.322882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.322895 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.425596 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.425645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.425657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.425674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.425687 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.531749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.531836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.531861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.531890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.531907 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.634613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.634654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.634668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.634687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.634702 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.693233 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.694308 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:08:23 crc kubenswrapper[4730]: E0221 00:08:23.694357 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:23 crc kubenswrapper[4730]: E0221 00:08:23.694601 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.702251 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:48:48.519998741 +0000 UTC Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.738450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.738506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.738528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.738552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.738569 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.841590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.841694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.841709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.841729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.841745 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.944075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.944116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.944126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.944141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:23 crc kubenswrapper[4730]: I0221 00:08:23.944151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:23Z","lastTransitionTime":"2026-02-21T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.046301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.046339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.046347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.046360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.046368 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.148806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.148847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.148861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.148877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.148891 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.251559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.251620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.251638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.251662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.251678 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.354231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.354277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.354287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.354302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.354312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.457084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.457119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.457128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.457145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.457158 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.558628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.558669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.558677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.558693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.558703 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.660988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.661029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.661040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.661056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.661064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.692495 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.692591 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:24 crc kubenswrapper[4730]: E0221 00:08:24.692642 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:24 crc kubenswrapper[4730]: E0221 00:08:24.692805 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.692591 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:24 crc kubenswrapper[4730]: E0221 00:08:24.693036 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.703204 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:42:26.125716139 +0000 UTC Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.763628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.763688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.763704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.763773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.763791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.867168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.867237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.867258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.867289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.867310 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.970042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.970116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.970136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.970161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:24 crc kubenswrapper[4730]: I0221 00:08:24.970177 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:24Z","lastTransitionTime":"2026-02-21T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.073244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.073301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.073317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.073337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.073354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.176203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.176286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.176312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.176349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.176388 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.279522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.279580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.279597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.279620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.279637 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.381906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.381967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.381981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.381998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.382008 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.485099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.485144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.485156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.485174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.485187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.588238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.588288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.588298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.588316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.588328 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.691470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.691544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.691567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.691591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.691633 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.692267 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:25 crc kubenswrapper[4730]: E0221 00:08:25.692447 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.703876 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:07:42.965846263 +0000 UTC Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.793735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.793802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.793824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.793850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.793872 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.897573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.897638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.897656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.897682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:25 crc kubenswrapper[4730]: I0221 00:08:25.897701 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:25Z","lastTransitionTime":"2026-02-21T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.001643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.001711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.001729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.001757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.001777 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.105029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.105093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.105110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.105140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.105156 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.208353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.208421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.208446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.208473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.208496 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.312006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.312081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.312102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.312131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.312151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.415745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.415799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.415811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.415830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.415846 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.520025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.520100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.520124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.520157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.520183 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.529871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.529924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.529935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.529971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.529987 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.547145 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:26Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.552821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.552900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.552919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.552972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.552992 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.573238 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:26Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.579727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.579800 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.579819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.579849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.579868 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.602299 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:26Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.608583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.608642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.608660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.608690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.608709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.630249 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:26Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.636012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.636099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.636119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.636153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.636177 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.653884 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T00:08:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c2d16590-4847-4316-a218-c611e1dabc66\\\",\\\"systemUUID\\\":\\\"47786c44-bba6-409d-9771-9e2e16f93f54\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:26Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.654173 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.656557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.656608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.656626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.656651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.656673 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.693238 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.693322 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.693373 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.693539 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.693687 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:26 crc kubenswrapper[4730]: E0221 00:08:26.693933 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.704458 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:23:57.728746158 +0000 UTC
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.759615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.759678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.759699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.759732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.759759 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.863058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.863134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.863159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.863199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.863230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.967190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.967336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.967362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.967397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:26 crc kubenswrapper[4730]: I0221 00:08:26.967420 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:26Z","lastTransitionTime":"2026-02-21T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.071232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.071283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.071294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.071315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.071330 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.174737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.174803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.174818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.174839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.174856 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.277611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.277646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.277655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.277678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.277687 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.380881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.380923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.380932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.380963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.380973 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.483695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.483739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.483752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.483772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.483786 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.585847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.585900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.585919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.585978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.586003 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.688411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.688471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.688492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.688524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.688548 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.692913 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:27 crc kubenswrapper[4730]: E0221 00:08:27.693114 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.705100 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:07:20.392866956 +0000 UTC
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.791236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.791290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.791307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.791350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.791361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.893510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.893542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.893551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.893564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.893572 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.996484 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.996540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.996557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.996581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:27 crc kubenswrapper[4730]: I0221 00:08:27.996598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:27Z","lastTransitionTime":"2026-02-21T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.099584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.099634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.099647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.099671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.099683 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.210049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.210119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.210136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.210158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.210178 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.314644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.314730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.314753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.314784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.314805 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.417360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.417430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.417444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.417467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.417480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.520288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.520364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.520378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.520409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.520424 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.623716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.623786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.623805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.623828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.623845 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.693085 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.693186 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.693085 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:28 crc kubenswrapper[4730]: E0221 00:08:28.693332 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:28 crc kubenswrapper[4730]: E0221 00:08:28.693578 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:28 crc kubenswrapper[4730]: E0221 00:08:28.693692 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.706101 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:27:22.138288708 +0000 UTC
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.712931 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a89256f-4201-4857-9697-c2a6e6bf6ed4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b1a4186f767d15fe95a75c03b53e279537c7e7d534cffec8f37ddab3962dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ecbced28fb6567dd86d0e98d25258c52dea061de5510e1ddab1c8a944507e96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.727380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.727440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.727464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.727494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.727515 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.735148 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.758104 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8e339028d5282de5b54368b4a8933b7412b6010450ee28fb083fd3350a7e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.784416 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4614e743ad793ec041a12da48225068c70f2d776b87274860b573c42d42cce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1e1577cff1e0d229c71ef4655ade462ff7782305e1e0f68ecbe587603a3fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z"
Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.810652 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36b7d0c1-8747-4652-a573-2e8f0e1cb6e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T00:06:58Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0221 00:06:52.601970 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 00:06:52.607360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3357490903/tls.crt::/tmp/serving-cert-3357490903/tls.key\\\\\\\"\\\\nI0221 00:06:58.121731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 00:06:58.127468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 00:06:58.127486 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 00:06:58.127510 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 00:06:58.127515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 00:06:58.138828 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 00:06:58.138872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 00:06:58.138890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 00:06:58.138897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 00:06:58.138903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 00:06:58.138909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0221 00:06:58.138841 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0221 00:06:58.140341 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b6403ac588bfb2debd772e9e3b1a072
c309b73f9241a679b16031e7239b5fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.830906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.830995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.831017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.831048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.831070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.832857 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gsndg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900f07ef-9762-49ec-9551-41a6ce12659d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:07:46Z\\\",\\\"message\\\":\\\"2026-02-21T00:07:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59\\\\n2026-02-21T00:07:01+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b8e0ff4e-14c6-4306-9c33-fda24139dd59 to /host/opt/cni/bin/\\\\n2026-02-21T00:07:01Z [verbose] multus-daemon started\\\\n2026-02-21T00:07:01Z [verbose] Readiness Indicator file check\\\\n2026-02-21T00:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6htjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gsndg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.852158 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7745d1e7-d559-4eb3-97cf-870e01ade14d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://189ee2a2bd554be2750565da2a51a8b3125e0349acfc20e347a182d3fb521f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7992654dbfdd930f25623bdf005cc82cf4562
f94a02aa5e51e5996380c25417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qjjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zqwvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.870532 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-snhft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf7c949-7646-4b97-9ffa-bf019455ed07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dqkbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:07:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-snhft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc 
kubenswrapper[4730]: I0221 00:08:28.891996 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab89c178-119a-439e-a93f-f515b431ba96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fed08b7ba240311a68caae197a7a3643593c810b92f5c6d10c28584084d920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a544d47c97f806a94461291f56bea6eb69c99909d44bd61b50787beddda269\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b411a288944c0c29798f478163a824e3cb80076e49c0e23695b4318f872c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04ed0496b6e8c67fac61e047a297ee0c9ba23efc5b98d7525956b6194336a04f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.925235 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5cc76f5-08a2-45a3-977c-4cae0bf4afcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b01d9f5469c5a8435dd76dd95c960bf5fbde9e1fbdebef7237defe23698e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f9978e22000af24129a2d76d4b16ae23320a557dbe1176a45652f5e7b3f827f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c8f107bb3485593146359e4e396ae81b85e7ddc9f2c8e51b61ef0e0799c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8957388ef716654d78f44ffa622c269c684d6ce42e5461d3c9ec69336714b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da46b176edf8bab4853404d2a26aba321cd2ca213bf6bfdc602ba3a059afea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa8243969b9c8d0d33a0529a30541fcb820b7c2a0b6c65e61d9e00521ac8e722\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-21T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dac36cf01c6b0b4222e814bd475b2fced9ce5beadaac876e19435c2c65b3ea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92739e69e613ebe66d3cf93b8a99ff7973276ae965595b92dfe2b155eab6f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.933734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.933809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.933828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.933858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.933877 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:28Z","lastTransitionTime":"2026-02-21T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.941450 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwggg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"805246ab-eb54-4142-bdc1-cd658cfb3615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ee1b74cb255ef83a6087a3d16bb11cb0f9339b3d0fef170b47b8d61cdaa230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvjfd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwggg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.959547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7622a560-9120-4202-b95a-246a806fe889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a41848600fe3193d1041b04ee9ca4759ee192ecfe0467e3a3e34a2d97f5d0d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77k74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-plgd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:28 crc kubenswrapper[4730]: I0221 00:08:28.980116 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v58rm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed585257-5535-4eb9-9a7c-81081bdae051\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc710ae593aa5ba6cb16069047f8953212b8e3f4df83665c2170e825ad132c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc188841c26935e42d1285421a8fbe5d41948eecb86dc575731a79f0dda1e852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b676e25f876dd1863fac21319d7eea68fad796ec21760c4540c21071cb961bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://860371ba3f7012c9501f1157fded15dccffaa1d56f919844668e82b4ba8d7123\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b42fc5fd99fe1158b9335c40319624ee90948d6965b5f045f8a40ff851d1cf8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9da537ffd08580b087ec4fa2fe830ffd883242fa64f860e46c687aec09db548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c1e085a6edac96155c5e7f735698d627ebd9488692b5ebd75ea4fb02c5e73e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rl2s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v58rm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:28Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.009379 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6272ef5-e657-4f64-a217-305dddfe36cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T00:08:00Z\\\",\\\"message\\\":\\\"o:656] Stopping watch factory\\\\nI0221 00:08:00.630565 6829 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630477 6829 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 00:08:00.630574 6829 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 00:08:00.630865 6829 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0221 00:08:00.630918 6829 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631050 6829 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631182 6829 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631540 6829 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631563 6829 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 00:08:00.631675 6829 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9aa8a6ee418d7f8fb
c26bb970c4e9490311f3e64be168943665e2ce20664db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T00:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kp9wk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.029685 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"507c78aa-ad5a-4dd1-b6c9-7dc74465b786\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c226fd5f7a1fa4868a23e487326ff9e1e49e211db44b5e3a29c173221db2c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a7aee925d7e8b7f71d482ba2bbe6b09281939484a0366c658e1c3c7052be4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://535bd0f467313ad3eaddaecb6ecbb6fcb872acaeb1eca33ae0f4eea2828abc8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.036411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.036467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.036487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.036513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.036531 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.051597 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffcb6090531d8fd66ad5209235cfc6875567400e4093d25b5a41e772d1e93d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.075333 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.094471 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.110106 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbr58" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7c446ed-2321-4ed4-a768-17e71bc811ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954399e0d98dfc63b806a464b0ab60404939c60c6a73a00645bfb0eed32ae42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T00:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T00:06:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbr58\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.140348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.140396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.140405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.140421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.140438 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.244272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.244348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.244367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.244399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.244424 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.348076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.348148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.348167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.348192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.348211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.451090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.451147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.451163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.451190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.451208 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.554690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.554767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.554801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.554832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.554855 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.657615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.657690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.657715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.657747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.657771 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.692975 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:29 crc kubenswrapper[4730]: E0221 00:08:29.693331 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.706483 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:01:18.013053275 +0000 UTC Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.760764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.760826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.760846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.760876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.760897 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.863986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.864034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.864045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.864063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.864078 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.966725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.967371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.967601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.967781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:29 crc kubenswrapper[4730]: I0221 00:08:29.967997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:29Z","lastTransitionTime":"2026-02-21T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.071332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.071393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.071411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.071440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.071461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.174723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.174810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.174834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.174862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.174882 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.277571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.277648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.277668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.277693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.277714 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.380649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.381138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.381245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.381350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.381454 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.485313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.485395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.485420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.485450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.485470 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.588360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.588413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.588425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.588446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.588460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692322 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:30 crc kubenswrapper[4730]: E0221 00:08:30.692654 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692728 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: E0221 00:08:30.692778 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.692360 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:30 crc kubenswrapper[4730]: E0221 00:08:30.692865 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.707720 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:08:11.159532721 +0000 UTC Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.796821 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.796903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.796921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.796974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.796995 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.899389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.899430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.899438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.899452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:30 crc kubenswrapper[4730]: I0221 00:08:30.899462 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:30Z","lastTransitionTime":"2026-02-21T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.001452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.001512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.001529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.001557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.001574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.104402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.104470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.104489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.104515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.104533 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.207471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.207517 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.207531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.207553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.207594 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.309855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.309997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.310023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.310053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.310075 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.413102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.413191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.413210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.413239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.413262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.516547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.516597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.516613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.516634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.516651 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.619827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.619867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.619877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.619898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.619911 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.692275 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:31 crc kubenswrapper[4730]: E0221 00:08:31.692438 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.708760 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:13:33.413171665 +0000 UTC Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.722882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.722916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.722924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.722937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.722967 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.825706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.825757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.825770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.825789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.825804 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.928433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.928479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.928488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.928503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:31 crc kubenswrapper[4730]: I0221 00:08:31.928512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:31Z","lastTransitionTime":"2026-02-21T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.031502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.031579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.031604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.031642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.031666 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.135264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.135322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.135335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.135351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.135399 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.238658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.238721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.238738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.238761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.238777 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.341813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.341928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.341994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.342034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.342058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.445269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.445344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.445361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.445392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.445412 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.548847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.548918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.548936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.549046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.549069 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.652761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.652844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.652861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.652890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.652911 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.692568 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.692708 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.692569 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:32 crc kubenswrapper[4730]: E0221 00:08:32.692782 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:32 crc kubenswrapper[4730]: E0221 00:08:32.693320 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:32 crc kubenswrapper[4730]: E0221 00:08:32.693450 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.709537 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:38:59.352693082 +0000 UTC Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.756525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.756592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.756609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.756635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.756654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.860036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.860102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.860121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.860146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.860173 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.964125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.964177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.964187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.964204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:32 crc kubenswrapper[4730]: I0221 00:08:32.964214 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:32Z","lastTransitionTime":"2026-02-21T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.066249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.066303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.066317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.066338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.066349 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.169135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.169186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.169205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.169229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.169247 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.272968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.273023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.273041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.273066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.273084 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.375859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.375925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.375975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.376001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.376018 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.428466 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/1.log" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.429075 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/0.log" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.429130 4730 generic.go:334] "Generic (PLEG): container finished" podID="900f07ef-9762-49ec-9551-41a6ce12659d" containerID="510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833" exitCode=1 Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.429160 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerDied","Data":"510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.429194 4730 scope.go:117] "RemoveContainer" containerID="8186721b7172155e290e27f224845bd29beb3e5e49bd872c0f1576e7267fcd5c" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.429613 4730 scope.go:117] "RemoveContainer" containerID="510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833" Feb 21 00:08:33 crc kubenswrapper[4730]: E0221 00:08:33.429787 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gsndg_openshift-multus(900f07ef-9762-49ec-9551-41a6ce12659d)\"" pod="openshift-multus/multus-gsndg" podUID="900f07ef-9762-49ec-9551-41a6ce12659d" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.480218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc 
kubenswrapper[4730]: I0221 00:08:33.480843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.481021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.481180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.481384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.494635 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=95.494605242 podStartE2EDuration="1m35.494605242s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.476984356 +0000 UTC m=+115.488551311" watchObservedRunningTime="2026-02-21 00:08:33.494605242 +0000 UTC m=+115.506172197" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.522513 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zqwvk" podStartSLOduration=94.522491799 podStartE2EDuration="1m34.522491799s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-21 00:08:33.510793837 +0000 UTC m=+115.522360782" watchObservedRunningTime="2026-02-21 00:08:33.522491799 +0000 UTC m=+115.534058755" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.569985 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.569932512 podStartE2EDuration="1m1.569932512s" podCreationTimestamp="2026-02-21 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.569608095 +0000 UTC m=+115.581175030" watchObservedRunningTime="2026-02-21 00:08:33.569932512 +0000 UTC m=+115.581499467" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.583509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.583557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.583571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.583601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.583616 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.594802 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=94.594784956 podStartE2EDuration="1m34.594784956s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.594028801 +0000 UTC m=+115.605595736" watchObservedRunningTime="2026-02-21 00:08:33.594784956 +0000 UTC m=+115.606351891" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.607690 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rwggg" podStartSLOduration=95.607674854 podStartE2EDuration="1m35.607674854s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.606869977 +0000 UTC m=+115.618436912" watchObservedRunningTime="2026-02-21 00:08:33.607674854 +0000 UTC m=+115.619241789" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.620891 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podStartSLOduration=95.620876837 podStartE2EDuration="1m35.620876837s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.619833855 +0000 UTC m=+115.631400790" watchObservedRunningTime="2026-02-21 00:08:33.620876837 +0000 UTC m=+115.632443772" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.665481 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-v58rm" podStartSLOduration=94.665460801 
podStartE2EDuration="1m34.665460801s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.644545697 +0000 UTC m=+115.656112632" watchObservedRunningTime="2026-02-21 00:08:33.665460801 +0000 UTC m=+115.677027736" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.666006 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=93.666000211 podStartE2EDuration="1m33.666000211s" podCreationTimestamp="2026-02-21 00:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.6654303 +0000 UTC m=+115.676997235" watchObservedRunningTime="2026-02-21 00:08:33.666000211 +0000 UTC m=+115.677567146" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.685286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.685317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.685327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.685339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.685347 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.692633 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:33 crc kubenswrapper[4730]: E0221 00:08:33.692903 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.711257 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:29:14.16388685 +0000 UTC Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.722568 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lbr58" podStartSLOduration=95.722553513 podStartE2EDuration="1m35.722553513s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.721834068 +0000 UTC m=+115.733401003" watchObservedRunningTime="2026-02-21 00:08:33.722553513 +0000 UTC m=+115.734120448" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.736606 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.736586454 podStartE2EDuration="24.736586454s" podCreationTimestamp="2026-02-21 00:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:33.735617144 +0000 UTC m=+115.747184079" watchObservedRunningTime="2026-02-21 00:08:33.736586454 +0000 UTC m=+115.748153399" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.789777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.789819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.789835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.789855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.789873 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.892610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.892649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.892658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.892675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.892685 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.995524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.995562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.995570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.995584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:33 crc kubenswrapper[4730]: I0221 00:08:33.995593 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:33Z","lastTransitionTime":"2026-02-21T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.098168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.098274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.098298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.098344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.098382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.204107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.204142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.204151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.204166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.204175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.306489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.306539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.306555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.306578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.306597 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.410475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.410547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.410568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.410592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.410610 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.436730 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/1.log" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.513819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.513874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.513890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.513910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.513922 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.616450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.616534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.616549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.616569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.616583 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.693276 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.693409 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:34 crc kubenswrapper[4730]: E0221 00:08:34.693515 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.693293 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:34 crc kubenswrapper[4730]: E0221 00:08:34.693652 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:34 crc kubenswrapper[4730]: E0221 00:08:34.693776 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.712053 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:41:00.134002585 +0000 UTC Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.719042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.719103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.719123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.719151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.719173 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.823086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.823138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.823148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.823164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.823174 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.926217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.926300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.926322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.926358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:34 crc kubenswrapper[4730]: I0221 00:08:34.926379 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:34Z","lastTransitionTime":"2026-02-21T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.029931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.030017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.030034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.030058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.030076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.132608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.132657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.132675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.132699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.132716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.235512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.235572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.235588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.235613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.235633 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.338914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.339016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.339032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.339107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.339154 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.442107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.442176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.442193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.442216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.442237 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.545142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.545217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.545240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.545276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.545297 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.649381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.649469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.649489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.649513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.649531 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.693102 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:35 crc kubenswrapper[4730]: E0221 00:08:35.693704 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.712985 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:42:04.978094082 +0000 UTC Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.753033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.753111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.753134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.753158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.753175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.861568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.861610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.861619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.861634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.861645 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.964311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.964373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.964389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.964417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:35 crc kubenswrapper[4730]: I0221 00:08:35.964435 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:35Z","lastTransitionTime":"2026-02-21T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.067324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.067393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.067420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.067451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.067476 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.169658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.169698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.169711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.169730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.169742 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.272167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.272211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.272234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.272255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.272269 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.374324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.374356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.374364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.374376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.374384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.477216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.477276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.477296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.477320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.477338 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.580486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.580540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.580556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.580579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.580600 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.682623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.682675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.682691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.682715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.682733 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.692585 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.692625 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.692646 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:36 crc kubenswrapper[4730]: E0221 00:08:36.692736 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:36 crc kubenswrapper[4730]: E0221 00:08:36.692849 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:36 crc kubenswrapper[4730]: E0221 00:08:36.693214 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.693734 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:08:36 crc kubenswrapper[4730]: E0221 00:08:36.694062 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kp9wk_openshift-ovn-kubernetes(c6272ef5-e657-4f64-a217-305dddfe36cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.713367 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:03:46.825699032 +0000 UTC Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.785427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.785484 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.785503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.785531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.785554 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.836545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.836612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.836630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.836654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.836670 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T00:08:36Z","lastTransitionTime":"2026-02-21T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.897821 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk"] Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.898570 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.902638 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.903456 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.903628 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 21 00:08:36 crc kubenswrapper[4730]: I0221 00:08:36.903832 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.048684 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7568e3e-04e7-4efa-b8bf-e3be003060f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.048874 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7568e3e-04e7-4efa-b8bf-e3be003060f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.048930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.049545 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.049805 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7568e3e-04e7-4efa-b8bf-e3be003060f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.150826 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7568e3e-04e7-4efa-b8bf-e3be003060f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.150900 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7568e3e-04e7-4efa-b8bf-e3be003060f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.150929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.150983 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.151075 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7568e3e-04e7-4efa-b8bf-e3be003060f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.151105 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.151183 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7568e3e-04e7-4efa-b8bf-e3be003060f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.154010 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e7568e3e-04e7-4efa-b8bf-e3be003060f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.161797 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7568e3e-04e7-4efa-b8bf-e3be003060f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.169625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7568e3e-04e7-4efa-b8bf-e3be003060f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kk4pk\" (UID: \"e7568e3e-04e7-4efa-b8bf-e3be003060f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.230219 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.450210 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" event={"ID":"e7568e3e-04e7-4efa-b8bf-e3be003060f1","Type":"ContainerStarted","Data":"54d32df58b706fa7e4e3c68a1314de2f5cb6023d948d72bc82edd64166af8ae8"} Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.450659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" event={"ID":"e7568e3e-04e7-4efa-b8bf-e3be003060f1","Type":"ContainerStarted","Data":"cdc45dbc17453a0156dd8a45babc009a3a930f0f8c7dc916d1a090b9e8fde64b"} Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.473352 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kk4pk" podStartSLOduration=99.473307699 podStartE2EDuration="1m39.473307699s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:37.472138456 +0000 UTC m=+119.483705431" watchObservedRunningTime="2026-02-21 00:08:37.473307699 +0000 UTC m=+119.484874674" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.692698 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:37 crc kubenswrapper[4730]: E0221 00:08:37.692886 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.714068 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:05:37.654849851 +0000 UTC Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.714132 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 21 00:08:37 crc kubenswrapper[4730]: I0221 00:08:37.728515 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 21 00:08:38 crc kubenswrapper[4730]: E0221 00:08:38.652205 4730 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 21 00:08:38 crc kubenswrapper[4730]: I0221 00:08:38.692895 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:38 crc kubenswrapper[4730]: I0221 00:08:38.692931 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:38 crc kubenswrapper[4730]: E0221 00:08:38.693786 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:38 crc kubenswrapper[4730]: I0221 00:08:38.693840 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:38 crc kubenswrapper[4730]: E0221 00:08:38.694130 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:38 crc kubenswrapper[4730]: E0221 00:08:38.694189 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:38 crc kubenswrapper[4730]: E0221 00:08:38.817457 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 00:08:39 crc kubenswrapper[4730]: I0221 00:08:39.692992 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:39 crc kubenswrapper[4730]: E0221 00:08:39.693122 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:40 crc kubenswrapper[4730]: I0221 00:08:40.692611 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:40 crc kubenswrapper[4730]: I0221 00:08:40.692737 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:40 crc kubenswrapper[4730]: I0221 00:08:40.692625 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:40 crc kubenswrapper[4730]: E0221 00:08:40.692930 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:40 crc kubenswrapper[4730]: E0221 00:08:40.693125 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:40 crc kubenswrapper[4730]: E0221 00:08:40.693306 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:41 crc kubenswrapper[4730]: I0221 00:08:41.693004 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:41 crc kubenswrapper[4730]: E0221 00:08:41.693207 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:42 crc kubenswrapper[4730]: I0221 00:08:42.692766 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:42 crc kubenswrapper[4730]: E0221 00:08:42.692988 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:42 crc kubenswrapper[4730]: I0221 00:08:42.693003 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:42 crc kubenswrapper[4730]: E0221 00:08:42.693174 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:42 crc kubenswrapper[4730]: I0221 00:08:42.693900 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:42 crc kubenswrapper[4730]: E0221 00:08:42.694262 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:43 crc kubenswrapper[4730]: I0221 00:08:43.692763 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:43 crc kubenswrapper[4730]: E0221 00:08:43.693066 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:43 crc kubenswrapper[4730]: I0221 00:08:43.693690 4730 scope.go:117] "RemoveContainer" containerID="510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833" Feb 21 00:08:43 crc kubenswrapper[4730]: E0221 00:08:43.818657 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 00:08:44 crc kubenswrapper[4730]: I0221 00:08:44.477069 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/1.log" Feb 21 00:08:44 crc kubenswrapper[4730]: I0221 00:08:44.477127 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerStarted","Data":"0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a"} Feb 21 00:08:44 crc kubenswrapper[4730]: I0221 00:08:44.693163 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:44 crc kubenswrapper[4730]: I0221 00:08:44.693197 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:44 crc kubenswrapper[4730]: E0221 00:08:44.693310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:44 crc kubenswrapper[4730]: I0221 00:08:44.693352 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:44 crc kubenswrapper[4730]: E0221 00:08:44.693536 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:44 crc kubenswrapper[4730]: E0221 00:08:44.693653 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:45 crc kubenswrapper[4730]: I0221 00:08:45.692938 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:45 crc kubenswrapper[4730]: E0221 00:08:45.693171 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:46 crc kubenswrapper[4730]: I0221 00:08:46.694458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:46 crc kubenswrapper[4730]: I0221 00:08:46.694557 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:46 crc kubenswrapper[4730]: I0221 00:08:46.694740 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:46 crc kubenswrapper[4730]: E0221 00:08:46.694760 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:46 crc kubenswrapper[4730]: E0221 00:08:46.694900 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:46 crc kubenswrapper[4730]: E0221 00:08:46.695064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:47 crc kubenswrapper[4730]: I0221 00:08:47.693578 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:47 crc kubenswrapper[4730]: E0221 00:08:47.693787 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:48 crc kubenswrapper[4730]: I0221 00:08:48.693246 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:48 crc kubenswrapper[4730]: E0221 00:08:48.695696 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:48 crc kubenswrapper[4730]: I0221 00:08:48.695718 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:48 crc kubenswrapper[4730]: E0221 00:08:48.696134 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:48 crc kubenswrapper[4730]: I0221 00:08:48.695756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:48 crc kubenswrapper[4730]: E0221 00:08:48.696367 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:48 crc kubenswrapper[4730]: E0221 00:08:48.819533 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 00:08:49 crc kubenswrapper[4730]: I0221 00:08:49.693074 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:49 crc kubenswrapper[4730]: E0221 00:08:49.693260 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07" Feb 21 00:08:50 crc kubenswrapper[4730]: I0221 00:08:50.693002 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 00:08:50 crc kubenswrapper[4730]: I0221 00:08:50.693121 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 00:08:50 crc kubenswrapper[4730]: E0221 00:08:50.693215 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 00:08:50 crc kubenswrapper[4730]: I0221 00:08:50.693353 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:08:50 crc kubenswrapper[4730]: E0221 00:08:50.693541 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 00:08:50 crc kubenswrapper[4730]: E0221 00:08:50.693651 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 00:08:51 crc kubenswrapper[4730]: I0221 00:08:51.693080 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:08:51 crc kubenswrapper[4730]: E0221 00:08:51.693830 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:51 crc kubenswrapper[4730]: I0221 00:08:51.694167 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.507373 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/3.log"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.509892 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerStarted","Data":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"}
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.510285 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.535859 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gsndg" podStartSLOduration=114.535843439 podStartE2EDuration="1m54.535843439s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:44.499595982 +0000 UTC m=+126.511162967" watchObservedRunningTime="2026-02-21 00:08:52.535843439 +0000 UTC m=+134.547410364"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.693295 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.693376 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.693468 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:52 crc kubenswrapper[4730]: E0221 00:08:52.693653 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:52 crc kubenswrapper[4730]: E0221 00:08:52.693787 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:52 crc kubenswrapper[4730]: E0221 00:08:52.693977 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.712471 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podStartSLOduration=113.712448916 podStartE2EDuration="1m53.712448916s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:08:52.536907834 +0000 UTC m=+134.548474769" watchObservedRunningTime="2026-02-21 00:08:52.712448916 +0000 UTC m=+134.724015851"
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.713069 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-snhft"]
Feb 21 00:08:52 crc kubenswrapper[4730]: I0221 00:08:52.713159 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:08:52 crc kubenswrapper[4730]: E0221 00:08:52.713420 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:53 crc kubenswrapper[4730]: E0221 00:08:53.820286 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 21 00:08:54 crc kubenswrapper[4730]: I0221 00:08:54.692980 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:08:54 crc kubenswrapper[4730]: I0221 00:08:54.693030 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:54 crc kubenswrapper[4730]: E0221 00:08:54.693141 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:54 crc kubenswrapper[4730]: I0221 00:08:54.693319 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:54 crc kubenswrapper[4730]: I0221 00:08:54.693356 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:54 crc kubenswrapper[4730]: E0221 00:08:54.693455 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:54 crc kubenswrapper[4730]: E0221 00:08:54.693616 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:54 crc kubenswrapper[4730]: E0221 00:08:54.693764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:56 crc kubenswrapper[4730]: I0221 00:08:56.692934 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:56 crc kubenswrapper[4730]: I0221 00:08:56.693050 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:56 crc kubenswrapper[4730]: I0221 00:08:56.693112 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:56 crc kubenswrapper[4730]: I0221 00:08:56.693007 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:08:56 crc kubenswrapper[4730]: E0221 00:08:56.693166 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:56 crc kubenswrapper[4730]: E0221 00:08:56.693236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:56 crc kubenswrapper[4730]: E0221 00:08:56.693342 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:56 crc kubenswrapper[4730]: E0221 00:08:56.693460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:08:58 crc kubenswrapper[4730]: I0221 00:08:58.693117 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:08:58 crc kubenswrapper[4730]: I0221 00:08:58.693117 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:08:58 crc kubenswrapper[4730]: I0221 00:08:58.693163 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:08:58 crc kubenswrapper[4730]: I0221 00:08:58.693130 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:08:58 crc kubenswrapper[4730]: E0221 00:08:58.695240 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 00:08:58 crc kubenswrapper[4730]: E0221 00:08:58.695379 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 00:08:58 crc kubenswrapper[4730]: E0221 00:08:58.695491 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 00:08:58 crc kubenswrapper[4730]: E0221 00:08:58.695560 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-snhft" podUID="bcf7c949-7646-4b97-9ffa-bf019455ed07"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.692803 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.692842 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.693184 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.693669 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.695382 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.696495 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.697277 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.697647 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.697915 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 21 00:09:00 crc kubenswrapper[4730]: I0221 00:09:00.698081 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.563570 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:06 crc kubenswrapper[4730]: E0221 00:09:06.563821 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:11:08.563788437 +0000 UTC m=+270.575355382 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.564028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.564070 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.564109 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.564149 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.566171 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.572514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.572528 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.572800 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.719374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.733989 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:09:06 crc kubenswrapper[4730]: I0221 00:09:06.762684 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 00:09:06 crc kubenswrapper[4730]: W0221 00:09:06.976396 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-dfc2bb781d2116e2b982c9810a833452a51b9c98d97f6937c2491c698e51ba1e WatchSource:0}: Error finding container dfc2bb781d2116e2b982c9810a833452a51b9c98d97f6937c2491c698e51ba1e: Status 404 returned error can't find the container with id dfc2bb781d2116e2b982c9810a833452a51b9c98d97f6937c2491c698e51ba1e
Feb 21 00:09:07 crc kubenswrapper[4730]: W0221 00:09:07.136246 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9aec8997b4f9423980cd215fcd3f273e738f7052d598401e8ce72ca83350e21b WatchSource:0}: Error finding container 9aec8997b4f9423980cd215fcd3f273e738f7052d598401e8ce72ca83350e21b: Status 404 returned error can't find the container with id 9aec8997b4f9423980cd215fcd3f273e738f7052d598401e8ce72ca83350e21b
Feb 21 00:09:07 crc kubenswrapper[4730]: W0221 00:09:07.138632 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cc0624c512051c4322bcc827a847f4339bb47f089015bed644fedef545c221fc WatchSource:0}: Error finding container cc0624c512051c4322bcc827a847f4339bb47f089015bed644fedef545c221fc: Status 404 returned error can't find the container with id cc0624c512051c4322bcc827a847f4339bb47f089015bed644fedef545c221fc
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.556929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"111b5e305abd4eae4d27820e981c65d3876b33ccc882394098da562ffecccfad"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.556987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9aec8997b4f9423980cd215fcd3f273e738f7052d598401e8ce72ca83350e21b"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.557134 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.558507 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"250331efda0b027d67193cfaf792b37a63db639cdfc36b14bf457434996403fe"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.558644 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dfc2bb781d2116e2b982c9810a833452a51b9c98d97f6937c2491c698e51ba1e"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.561119 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d9c02ae09610a6f6e62f89e0395c0270539f393a91d4a6af67b6d1fd2d485c4e"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.561157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cc0624c512051c4322bcc827a847f4339bb47f089015bed644fedef545c221fc"}
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.609748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.647692 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.648081 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.648642 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z7gds"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.649205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.649588 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.650052 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.651317 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.656643 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.658259 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29527200-v2vdw"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.658720 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29527200-v2vdw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.659227 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.659684 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660036 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660260 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660541 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660616 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660706 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660740 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.660913 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.661036 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.663027 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.663212 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.663335 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p499"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.663861 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.679539 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.680661 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.681026 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.691162 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.691425 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.691662 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.692978 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.693927 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.694212 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.694412 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.694670 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.694814 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.695061 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.695395 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ttg62"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.695676 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.696063 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-p7z7k"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.696330 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.696713 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697175 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697508 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697560 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697642 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697698 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697787 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697823 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697830 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-p7z7k"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697896 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697971 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.698006 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.698041 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697575 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.698152 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.697897 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.698281 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.698536 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.700338 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.701073 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.701191 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.701266 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.703696 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.704314 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.705710 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.706208 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sqqlg"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.706595 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.707020 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.710559 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.710748 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.710891 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.711019 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.711201 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.713669 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.713805 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.713974 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714150 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 21 00:09:07 crc
kubenswrapper[4730]: I0221 00:09:07.714299 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714449 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714590 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714691 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714757 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714812 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.714988 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715111 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715221 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715295 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715375 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715500 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.715681 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716040 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716152 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716098 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.716480 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.717981 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: 
I0221 00:09:07.718921 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.722738 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.725967 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z7gds"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.726000 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wxl66"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.726631 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hltcd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.727596 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.727624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.728050 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.728375 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.728590 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.737396 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.738570 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.738694 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.739419 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.741527 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.742357 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.743175 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.743270 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.743848 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.744024 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.744201 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.744369 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.744526 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.744860 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.753132 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.753348 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.754204 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.754530 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.754747 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.754758 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-95lrd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.755274 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.755583 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.755670 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.756014 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.756199 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.756994 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.758703 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.758772 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.758797 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.758927 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.759032 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.759089 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.759163 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.759208 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.760117 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.763749 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.764194 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.764732 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.769771 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.769860 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.771265 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.772641 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.772879 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.774905 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.782606 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.783123 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.783546 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.783832 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.784354 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.784916 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.785148 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.787972 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.788666 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.789878 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.790412 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.791935 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.794021 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p7z7k"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.794112 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795095 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688jn\" (UniqueName: \"kubernetes.io/projected/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-kube-api-access-688jn\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795128 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795155 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-policies\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" 
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795172 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zwk\" (UniqueName: \"kubernetes.io/projected/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-kube-api-access-h5zwk\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795187 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-dir\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795217 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-trusted-ca\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt9v7\" (UniqueName: 
\"kubernetes.io/projected/ba763110-213a-49ec-9385-e723b6a02fc8-kube-api-access-xt9v7\") pod \"downloads-7954f5f757-p7z7k\" (UID: \"ba763110-213a-49ec-9385-e723b6a02fc8\") " pod="openshift-console/downloads-7954f5f757-p7z7k" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795250 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6gw\" (UniqueName: \"kubernetes.io/projected/5b7d4c04-0a49-410b-a682-b58b6b97a987-kube-api-access-xs6gw\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795265 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdplw\" (UniqueName: \"kubernetes.io/projected/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-kube-api-access-mdplw\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795281 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795297 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795325 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbfn\" (UniqueName: \"kubernetes.io/projected/113599a7-fd8d-4bcb-9e1a-ce992776990a-kube-api-access-rvbfn\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795344 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795358 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795382 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-node-pullsecrets\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795400 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwgt\" (UniqueName: \"kubernetes.io/projected/20f37963-2f14-4031-ba6a-5a2b2908ed09-kube-api-access-zbwgt\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795416 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-config\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795431 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglkd\" (UniqueName: \"kubernetes.io/projected/d6be412a-8d96-40ac-967b-96ad68c91c1a-kube-api-access-xglkd\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795447 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795462 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795475 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-audit\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/113599a7-fd8d-4bcb-9e1a-ce992776990a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rxn\" (UniqueName: \"kubernetes.io/projected/1b8a1e85-beca-4608-b727-1ee293d094b2-kube-api-access-q6rxn\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 
00:09:07.795519 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-image-import-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795533 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-serving-cert\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795547 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795564 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795578 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795591 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwbnr\" (UniqueName: \"kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795615 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795629 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-serving-cert\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-sgr8s\" (UniqueName: \"kubernetes.io/projected/eb679c27-8952-42f2-914e-13e4a969c408-kube-api-access-sgr8s\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795676 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-service-ca\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795689 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795702 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-machine-approver-tls\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795717 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: 
\"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795747 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fx7\" (UniqueName: \"kubernetes.io/projected/81ed0256-3be9-4994-85ee-f35c6be1bf63-kube-api-access-g6fx7\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795763 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-config\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-oauth-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795794 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hbl\" (UniqueName: \"kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795809 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6be412a-8d96-40ac-967b-96ad68c91c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795827 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d6be412a-8d96-40ac-967b-96ad68c91c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795846 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795863 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795895 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795911 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b8a1e85-beca-4608-b727-1ee293d094b2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795928 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhvf\" (UniqueName: \"kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 
00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795958 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795976 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.795993 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796008 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-oauth-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796023 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkvzq\" (UniqueName: \"kubernetes.io/projected/c4651f24-ec65-46fe-875a-7bf52568a045-kube-api-access-rkvzq\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796051 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-serving-cert\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796065 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-trusted-ca-bundle\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796079 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-audit-dir\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796093 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-service-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796108 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-images\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796136 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651f24-ec65-46fe-875a-7bf52568a045-serving-cert\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphl8\" (UniqueName: \"kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8\") pod \"image-pruner-29527200-v2vdw\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796166 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/81ed0256-3be9-4994-85ee-f35c6be1bf63-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796180 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-encryption-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796209 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-client\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796223 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796239 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796397 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8a1e85-beca-4608-b727-1ee293d094b2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796414 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca\") pod \"image-pruner-29527200-v2vdw\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796429 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-encryption-config\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796458 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-config\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796474 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-auth-proxy-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796501 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-etcd-client\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.796515 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.799752 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p499"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.803221 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.803955 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29527200-v2vdw"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.806463 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-29g9v"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.807320 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.828770 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.829279 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.829422 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.832185 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.836335 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ttg62"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.840316 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.841108 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.841936 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.842636 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.842717 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.842730 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.843070 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.844973 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sqqlg"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.847246 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.847872 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.848004 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.848390 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.848422 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.849216 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.851413 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sws4"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.852226 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-87wrr"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.852443 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.853014 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.855869 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kmpmr"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.856353 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.860924 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.861345 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.862359 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.865543 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.866125 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.866448 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.866527 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.866754 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.869322 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.869347 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.870637 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.872154 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hltcd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.873876 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.874900 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.876788 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.878874 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.880331 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.880926 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.884264 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.885843 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-95lrd"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.887272 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.888242 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.889194 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.890430 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.891263 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.892232 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.894208 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8thm5"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.894764 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kv84q"] Feb 21 00:09:07 crc kubenswrapper[4730]: 
I0221 00:09:07.895428 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.895732 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.896930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688jn\" (UniqueName: \"kubernetes.io/projected/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-kube-api-access-688jn\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.896974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.896997 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-serving-cert\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897014 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-dir\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897030 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-config\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6gw\" (UniqueName: \"kubernetes.io/projected/5b7d4c04-0a49-410b-a682-b58b6b97a987-kube-api-access-xs6gw\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897061 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdplw\" (UniqueName: \"kubernetes.io/projected/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-kube-api-access-mdplw\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897079 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3ca121a-9e67-478c-bbe4-484e36eb185f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897125 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvbfn\" (UniqueName: \"kubernetes.io/projected/113599a7-fd8d-4bcb-9e1a-ce992776990a-kube-api-access-rvbfn\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897140 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qlq\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-kube-api-access-r9qlq\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897159 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: 
\"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897174 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ca121a-9e67-478c-bbe4-484e36eb185f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwgt\" (UniqueName: \"kubernetes.io/projected/20f37963-2f14-4031-ba6a-5a2b2908ed09-kube-api-access-zbwgt\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897204 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897221 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897238 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xglkd\" (UniqueName: \"kubernetes.io/projected/d6be412a-8d96-40ac-967b-96ad68c91c1a-kube-api-access-xglkd\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897254 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-audit\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/113599a7-fd8d-4bcb-9e1a-ce992776990a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rxn\" (UniqueName: \"kubernetes.io/projected/1b8a1e85-beca-4608-b727-1ee293d094b2-kube-api-access-q6rxn\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897303 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-image-import-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897317 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-config\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897332 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897349 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897379 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897396 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897429 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-serving-cert\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxb7\" (UniqueName: \"kubernetes.io/projected/dbd3805f-f503-444d-88c7-829832077376-kube-api-access-qjxb7\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " 
pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897486 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6fx7\" (UniqueName: \"kubernetes.io/projected/81ed0256-3be9-4994-85ee-f35c6be1bf63-kube-api-access-g6fx7\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897504 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab327ef5-0e37-4dca-b380-5a0a3e1060af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-config\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897539 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897556 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c2wzh\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-kube-api-access-c2wzh\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6be412a-8d96-40ac-967b-96ad68c91c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897587 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9x9n\" (UniqueName: \"kubernetes.io/projected/9eb7868f-81d4-4a88-99bb-807599f57b97-kube-api-access-n9x9n\") pod \"migrator-59844c95c7-l28vp\" (UID: \"9eb7868f-81d4-4a88-99bb-807599f57b97\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897602 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkg7\" (UniqueName: \"kubernetes.io/projected/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-kube-api-access-vvkg7\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897638 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897653 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbd3805f-f503-444d-88c7-829832077376-service-ca-bundle\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhvf\" (UniqueName: \"kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897709 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3334b1ca-87b2-436d-a994-15634b8240e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897725 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkvzq\" (UniqueName: \"kubernetes.io/projected/c4651f24-ec65-46fe-875a-7bf52568a045-kube-api-access-rkvzq\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897757 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897773 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-service-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897788 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpnf\" (UniqueName: \"kubernetes.io/projected/ab327ef5-0e37-4dca-b380-5a0a3e1060af-kube-api-access-tbpnf\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897813 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108cc2af-75e9-47f2-a82e-03a59074bef8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897829 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651f24-ec65-46fe-875a-7bf52568a045-serving-cert\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897845 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphl8\" (UniqueName: \"kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8\") pod \"image-pruner-29527200-v2vdw\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897861 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897877 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/81ed0256-3be9-4994-85ee-f35c6be1bf63-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897892 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8a1e85-beca-4608-b727-1ee293d094b2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897908 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab327ef5-0e37-4dca-b380-5a0a3e1060af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca\") pod \"image-pruner-29527200-v2vdw\" (UID: 
\"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c34885d2-a418-472a-92fe-161f64359e5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-config\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.897983 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-encryption-config\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898025 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-etcd-client\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-auth-proxy-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-policies\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zwk\" (UniqueName: \"kubernetes.io/projected/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-kube-api-access-h5zwk\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898111 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-trusted-ca\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898142 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt9v7\" (UniqueName: \"kubernetes.io/projected/ba763110-213a-49ec-9385-e723b6a02fc8-kube-api-access-xt9v7\") pod \"downloads-7954f5f757-p7z7k\" (UID: \"ba763110-213a-49ec-9385-e723b6a02fc8\") " pod="openshift-console/downloads-7954f5f757-p7z7k"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898157 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898174 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898203 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-node-pullsecrets\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898220 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-config\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898253 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-serving-cert\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898269 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbnr\" (UniqueName: \"kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898285 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-default-certificate\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898335 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgr8s\" (UniqueName: \"kubernetes.io/projected/eb679c27-8952-42f2-914e-13e4a969c408-kube-api-access-sgr8s\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898351 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898367 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-machine-approver-tls\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898382 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-service-ca\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-oauth-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898413 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898429 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108cc2af-75e9-47f2-a82e-03a59074bef8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hbl\" (UniqueName: \"kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898463 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d6be412a-8d96-40ac-967b-96ad68c91c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898480 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ca121a-9e67-478c-bbe4-484e36eb185f-config\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898495 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b8a1e85-beca-4608-b727-1ee293d094b2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898542 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/719ba80e-56f2-49ca-96ed-7f53a9159916-metrics-tls\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898559 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898574 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-client\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898688 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108cc2af-75e9-47f2-a82e-03a59074bef8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898705 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-service-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898720 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/719ba80e-56f2-49ca-96ed-7f53a9159916-trusted-ca\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898735 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-trusted-ca-bundle\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898750 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-oauth-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898765 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898781 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-serving-cert\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898797 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-audit-dir\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-images\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898828 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-client\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-encryption-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898862 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898877 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898892 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt2x\" (UniqueName: \"kubernetes.io/projected/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-kube-api-access-wgt2x\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898909 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898923 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-bound-sa-token\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898967 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bjg\" (UniqueName: \"kubernetes.io/projected/c34885d2-a418-472a-92fe-161f64359e5b-kube-api-access-57bjg\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898985 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr5rz\" (UniqueName: \"kubernetes.io/projected/3334b1ca-87b2-436d-a994-15634b8240e2-kube-api-access-tr5rz\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.898999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-stats-auth\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899014 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-metrics-certs\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899028 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-metrics-tls\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899061 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899247 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-87wrr"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899279 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8thm5"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899290 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.899723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.900271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-dir\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.900896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-config\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.901049 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.901406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-config\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.901646 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.901662 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8a1e85-beca-4608-b727-1ee293d094b2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.902234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.902747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.903252 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.903359 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca\") pod \"image-pruner-29527200-v2vdw\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.903415 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81ed0256-3be9-4994-85ee-f35c6be1bf63-images\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.903460 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-service-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.915281 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-29g9v"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.916028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-encryption-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.916188 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-trusted-ca-bundle\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.916471 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.916808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.917261 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-config\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.917919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4651f24-ec65-46fe-875a-7bf52568a045-serving-cert\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.918469 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-oauth-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.918510 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.918587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-auth-proxy-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.919215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.919990 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.920638 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-node-pullsecrets\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.921292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.921734 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.922563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-config\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.923374 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb679c27-8952-42f2-914e-13e4a969c408-audit-dir\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924448 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924573 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924612 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924689 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d6be412a-8d96-40ac-967b-96ad68c91c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924713 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.924571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925080 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-etcd-client\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925372 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sws4"]
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925593 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-audit\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925623 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.925712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-config\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.926123 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb679c27-8952-42f2-914e-13e4a969c408-serving-cert\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.926227 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.926249 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"
Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.926344 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:07 crc
kubenswrapper[4730]: I0221 00:09:07.926559 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb679c27-8952-42f2-914e-13e4a969c408-image-import-ca\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.926700 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4651f24-ec65-46fe-875a-7bf52568a045-trusted-ca\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.927360 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d4c04-0a49-410b-a682-b58b6b97a987-service-ca\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.927577 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.920809 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f37963-2f14-4031-ba6a-5a2b2908ed09-audit-policies\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.928214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b8a1e85-beca-4608-b727-1ee293d094b2-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.928989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-serving-cert\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929047 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929058 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-etcd-client\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929150 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929153 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d6be412a-8d96-40ac-967b-96ad68c91c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929182 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/81ed0256-3be9-4994-85ee-f35c6be1bf63-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929331 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f37963-2f14-4031-ba6a-5a2b2908ed09-encryption-config\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.929863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc 
kubenswrapper[4730]: I0221 00:09:07.930062 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-machine-approver-tls\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.930293 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-serving-cert\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.930422 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.930826 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kv84q"] Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.931857 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/113599a7-fd8d-4bcb-9e1a-ce992776990a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.932980 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.933163 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.934411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.934544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-serving-cert\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.935527 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b7d4c04-0a49-410b-a682-b58b6b97a987-console-oauth-config\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.938805 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.959415 4730 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress"/"router-stats-default" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.978752 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 21 00:09:07 crc kubenswrapper[4730]: I0221 00:09:07.998991 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000625 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-service-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000665 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/719ba80e-56f2-49ca-96ed-7f53a9159916-trusted-ca\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt2x\" (UniqueName: \"kubernetes.io/projected/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-kube-api-access-wgt2x\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000712 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000736 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-bound-sa-token\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57bjg\" (UniqueName: \"kubernetes.io/projected/c34885d2-a418-472a-92fe-161f64359e5b-kube-api-access-57bjg\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr5rz\" (UniqueName: \"kubernetes.io/projected/3334b1ca-87b2-436d-a994-15634b8240e2-kube-api-access-tr5rz\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000829 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-stats-auth\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000848 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-metrics-certs\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000925 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-metrics-tls\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000969 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-serving-cert\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.000997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3ca121a-9e67-478c-bbe4-484e36eb185f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001012 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001027 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-config\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001047 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9qlq\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-kube-api-access-r9qlq\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ca121a-9e67-478c-bbe4-484e36eb185f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001098 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-config\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001117 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: 
\"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001137 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxb7\" (UniqueName: \"kubernetes.io/projected/dbd3805f-f503-444d-88c7-829832077376-kube-api-access-qjxb7\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001153 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001176 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab327ef5-0e37-4dca-b380-5a0a3e1060af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001190 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wzh\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-kube-api-access-c2wzh\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001206 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9x9n\" (UniqueName: \"kubernetes.io/projected/9eb7868f-81d4-4a88-99bb-807599f57b97-kube-api-access-n9x9n\") pod \"migrator-59844c95c7-l28vp\" (UID: \"9eb7868f-81d4-4a88-99bb-807599f57b97\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkg7\" (UniqueName: \"kubernetes.io/projected/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-kube-api-access-vvkg7\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001243 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbd3805f-f503-444d-88c7-829832077376-service-ca-bundle\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001262 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3334b1ca-87b2-436d-a994-15634b8240e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: 
\"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001305 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpnf\" (UniqueName: \"kubernetes.io/projected/ab327ef5-0e37-4dca-b380-5a0a3e1060af-kube-api-access-tbpnf\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108cc2af-75e9-47f2-a82e-03a59074bef8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001342 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab327ef5-0e37-4dca-b380-5a0a3e1060af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001361 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c34885d2-a418-472a-92fe-161f64359e5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" Feb 21 00:09:08 crc 
kubenswrapper[4730]: I0221 00:09:08.001397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001419 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-default-certificate\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001457 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108cc2af-75e9-47f2-a82e-03a59074bef8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ca121a-9e67-478c-bbe4-484e36eb185f-config\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001498 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/719ba80e-56f2-49ca-96ed-7f53a9159916-metrics-tls\") pod \"ingress-operator-5b745b69d9-866vd\" 
(UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001514 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-client\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.001531 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108cc2af-75e9-47f2-a82e-03a59074bef8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.002376 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbd3805f-f503-444d-88c7-829832077376-service-ca-bundle\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.002651 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.004310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-metrics-certs\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.004482 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-stats-auth\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.005132 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.005773 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dbd3805f-f503-444d-88c7-829832077376-default-certificate\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.021481 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.038886 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.059791 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.078634 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.098853 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.105428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-metrics-tls\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.120318 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.138702 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.159607 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.162668 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ca121a-9e67-478c-bbe4-484e36eb185f-config\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.178867 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.199336 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.204662 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ca121a-9e67-478c-bbe4-484e36eb185f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.238818 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.259139 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.278737 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.284371 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-serving-cert\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.298925 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.304837 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-client\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.319048 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.322310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-config\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.339822 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.341523 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.359838 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.361292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-etcd-service-ca\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" 
Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.380487 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.400147 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.419697 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.428811 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.439423 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.442479 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-config\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.459857 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.479132 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.499599 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.519734 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.540449 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.548004 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108cc2af-75e9-47f2-a82e-03a59074bef8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.566302 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.576641 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108cc2af-75e9-47f2-a82e-03a59074bef8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.580766 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 21 00:09:08 crc kubenswrapper[4730]: 
I0221 00:09:08.599653 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.605501 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/719ba80e-56f2-49ca-96ed-7f53a9159916-metrics-tls\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.626290 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.634085 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/719ba80e-56f2-49ca-96ed-7f53a9159916-trusted-ca\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.639674 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.660128 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.665534 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab327ef5-0e37-4dca-b380-5a0a3e1060af-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 
00:09:08.679196 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.699164 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.719445 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.739382 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.742453 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab327ef5-0e37-4dca-b380-5a0a3e1060af-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.759318 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.779546 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.797900 4730 request.go:700] Waited for 1.007293913s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0 Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.799955 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.807377 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3334b1ca-87b2-436d-a994-15634b8240e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.819316 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.839022 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.860356 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.879279 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.899750 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.919078 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.933565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c34885d2-a418-472a-92fe-161f64359e5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.959519 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.978881 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 21 00:09:08 crc kubenswrapper[4730]: I0221 00:09:08.998936 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.018929 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.039332 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.060058 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.078681 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.098694 4730 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.119166 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.140339 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.159140 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.181241 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.199407 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.229869 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.239499 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.259995 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.279512 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.299205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.319405 
4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.339378 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.358966 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.380779 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.400300 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.419079 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.439295 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.459235 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.481451 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.499599 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.518903 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.539720 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.559838 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.582188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.598759 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.620137 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.638999 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.658971 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.679463 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.698994 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.719281 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.738745 4730 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.760534 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.797065 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688jn\" (UniqueName: \"kubernetes.io/projected/c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f-kube-api-access-688jn\") pod \"machine-approver-56656f9798-k5djw\" (UID: \"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.798071 4730 request.go:700] Waited for 1.897715349s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.817606 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6gw\" (UniqueName: \"kubernetes.io/projected/5b7d4c04-0a49-410b-a682-b58b6b97a987-kube-api-access-xs6gw\") pod \"console-f9d7485db-ttg62\" (UID: \"5b7d4c04-0a49-410b-a682-b58b6b97a987\") " pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.836592 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdplw\" (UniqueName: \"kubernetes.io/projected/44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6-kube-api-access-mdplw\") pod \"openshift-controller-manager-operator-756b6f6bc6-7hdv5\" (UID: \"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.861496 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g6fx7\" (UniqueName: \"kubernetes.io/projected/81ed0256-3be9-4994-85ee-f35c6be1bf63-kube-api-access-g6fx7\") pod \"machine-api-operator-5694c8668f-z7gds\" (UID: \"81ed0256-3be9-4994-85ee-f35c6be1bf63\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.877672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhvf\" (UniqueName: \"kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf\") pod \"route-controller-manager-6576b87f9c-gcll8\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.896878 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphl8\" (UniqueName: \"kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8\") pod \"image-pruner-29527200-v2vdw\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.918603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkvzq\" (UniqueName: \"kubernetes.io/projected/c4651f24-ec65-46fe-875a-7bf52568a045-kube-api-access-rkvzq\") pod \"console-operator-58897d9998-sqqlg\" (UID: \"c4651f24-ec65-46fe-875a-7bf52568a045\") " pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.933594 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvbfn\" (UniqueName: \"kubernetes.io/projected/113599a7-fd8d-4bcb-9e1a-ce992776990a-kube-api-access-rvbfn\") pod \"cluster-samples-operator-665b6dd947-546xm\" (UID: \"113599a7-fd8d-4bcb-9e1a-ce992776990a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.941103 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.951763 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwgt\" (UniqueName: \"kubernetes.io/projected/20f37963-2f14-4031-ba6a-5a2b2908ed09-kube-api-access-zbwgt\") pod \"apiserver-7bbb656c7d-pzhn2\" (UID: \"20f37963-2f14-4031-ba6a-5a2b2908ed09\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.951997 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:09 crc kubenswrapper[4730]: W0221 00:09:09.978627 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a5e192_cdaf_4dda_9aa9_93d2d5ced68f.slice/crio-f23951f2d52a8a86078670e0406b3573319666f21dc838a629aed8b613294df4 WatchSource:0}: Error finding container f23951f2d52a8a86078670e0406b3573319666f21dc838a629aed8b613294df4: Status 404 returned error can't find the container with id f23951f2d52a8a86078670e0406b3573319666f21dc838a629aed8b613294df4 Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.981670 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgr8s\" (UniqueName: \"kubernetes.io/projected/eb679c27-8952-42f2-914e-13e4a969c408-kube-api-access-sgr8s\") pod \"apiserver-76f77b778f-6p499\" (UID: \"eb679c27-8952-42f2-914e-13e4a969c408\") " pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:09 crc kubenswrapper[4730]: I0221 00:09:09.998608 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.003048 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hbl\" (UniqueName: \"kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl\") pod \"controller-manager-879f6c89f-b4h9p\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.014238 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zwk\" (UniqueName: \"kubernetes.io/projected/c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0-kube-api-access-h5zwk\") pod \"authentication-operator-69f744f599-5mzr9\" (UID: \"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.021734 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.038723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbnr\" (UniqueName: \"kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr\") pod \"oauth-openshift-558db77b4-c2cjf\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.057632 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt9v7\" (UniqueName: \"kubernetes.io/projected/ba763110-213a-49ec-9385-e723b6a02fc8-kube-api-access-xt9v7\") pod \"downloads-7954f5f757-p7z7k\" (UID: \"ba763110-213a-49ec-9385-e723b6a02fc8\") " pod="openshift-console/downloads-7954f5f757-p7z7k" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.060617 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.070096 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.076376 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.090715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xglkd\" (UniqueName: \"kubernetes.io/projected/d6be412a-8d96-40ac-967b-96ad68c91c1a-kube-api-access-xglkd\") pod \"openshift-config-operator-7777fb866f-2zz7r\" (UID: \"d6be412a-8d96-40ac-967b-96ad68c91c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.094931 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.106720 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rxn\" (UniqueName: \"kubernetes.io/projected/1b8a1e85-beca-4608-b727-1ee293d094b2-kube-api-access-q6rxn\") pod \"openshift-apiserver-operator-796bbdcf4f-lq7m7\" (UID: \"1b8a1e85-beca-4608-b727-1ee293d094b2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.113260 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.133740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-bound-sa-token\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.157815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr5rz\" (UniqueName: \"kubernetes.io/projected/3334b1ca-87b2-436d-a994-15634b8240e2-kube-api-access-tr5rz\") pod \"control-plane-machine-set-operator-78cbb6b69f-4mngz\" (UID: \"3334b1ca-87b2-436d-a994-15634b8240e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.158419 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57bjg\" (UniqueName: \"kubernetes.io/projected/c34885d2-a418-472a-92fe-161f64359e5b-kube-api-access-57bjg\") pod \"multus-admission-controller-857f4d67dd-29g9v\" (UID: \"c34885d2-a418-472a-92fe-161f64359e5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.170201 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.173793 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-pruner-29527200-v2vdw"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.178812 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-692cl\" (UID: \"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.179227 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.187788 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.197955 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.203298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3ca121a-9e67-478c-bbe4-484e36eb185f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9sjwc\" (UID: \"b3ca121a-9e67-478c-bbe4-484e36eb185f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.204600 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2"]
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.214935 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9qlq\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-kube-api-access-r9qlq\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.224573 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.247601 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ttg62"]
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.251408 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49ec45fc-19c2-439f-b7cf-77fcfd764ae3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2jp56\" (UID: \"49ec45fc-19c2-439f-b7cf-77fcfd764ae3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.272550 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"
Feb 21 00:09:10 crc kubenswrapper[4730]: W0221 00:09:10.273925 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f37963_2f14_4031_ba6a_5a2b2908ed09.slice/crio-afba9ed1cb7cdde6f67452492080312bb5cb09e28604e5d09b905b4c579b89f9 WatchSource:0}: Error finding container afba9ed1cb7cdde6f67452492080312bb5cb09e28604e5d09b905b4c579b89f9: Status 404 returned error can't find the container with id afba9ed1cb7cdde6f67452492080312bb5cb09e28604e5d09b905b4c579b89f9
Feb 21 00:09:10 crc kubenswrapper[4730]: W0221 00:09:10.277505 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b7d4c04_0a49_410b_a682_b58b6b97a987.slice/crio-14968cefc488c72639d197559da96e23001a194042bed0e306f12b1d9b861917 WatchSource:0}: Error finding container 14968cefc488c72639d197559da96e23001a194042bed0e306f12b1d9b861917: Status 404 returned error can't find the container with id 14968cefc488c72639d197559da96e23001a194042bed0e306f12b1d9b861917
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.278325 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxb7\" (UniqueName: \"kubernetes.io/projected/dbd3805f-f503-444d-88c7-829832077376-kube-api-access-qjxb7\") pod \"router-default-5444994796-wxl66\" (UID: \"dbd3805f-f503-444d-88c7-829832077376\") " pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.282182 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.286590 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9x9n\" (UniqueName: \"kubernetes.io/projected/9eb7868f-81d4-4a88-99bb-807599f57b97-kube-api-access-n9x9n\") pod \"migrator-59844c95c7-l28vp\" (UID: \"9eb7868f-81d4-4a88-99bb-807599f57b97\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.287039 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/downloads-7954f5f757-p7z7k"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.294674 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wzh\" (UniqueName: \"kubernetes.io/projected/719ba80e-56f2-49ca-96ed-7f53a9159916-kube-api-access-c2wzh\") pod \"ingress-operator-5b745b69d9-866vd\" (UID: \"719ba80e-56f2-49ca-96ed-7f53a9159916\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.321719 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkg7\" (UniqueName: \"kubernetes.io/projected/f9958c43-f3a7-4b80-ab5f-4a32607b9fd6-kube-api-access-vvkg7\") pod \"etcd-operator-b45778765-95lrd\" (UID: \"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.339706 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108cc2af-75e9-47f2-a82e-03a59074bef8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kqglz\" (UID: \"108cc2af-75e9-47f2-a82e-03a59074bef8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.349474 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm"]
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.358634 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt2x\" (UniqueName: \"kubernetes.io/projected/b712f1fb-8e1e-43df-8cec-27a727d1fa4f-kube-api-access-wgt2x\") pod \"dns-operator-744455d44c-hltcd\" (UID: \"b712f1fb-8e1e-43df-8cec-27a727d1fa4f\") " pod="openshift-dns-operator/dns-operator-744455d44c-hltcd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.388024 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpnf\" (UniqueName: \"kubernetes.io/projected/ab327ef5-0e37-4dca-b380-5a0a3e1060af-kube-api-access-tbpnf\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxmlj\" (UID: \"ab327ef5-0e37-4dca-b380-5a0a3e1060af\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.396318 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.403219 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"]
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.418657 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.419286 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.424355 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.434747 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.442152 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.444017 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sqqlg"]
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.447646 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459562 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9zdk\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459612 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459640 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459661 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b301164-219c-45d4-ae7a-65efd88d3265-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459716 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b301164-219c-45d4-ae7a-65efd88d3265-proxy-tls\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459783 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca\") pod \"image-registry-697d97f7c8-rp9n4\" (UID:
\"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459800 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459819 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.459902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/1b301164-219c-45d4-ae7a-65efd88d3265-kube-api-access-6gw8z\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.460898 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:10.960865172 +0000 UTC m=+152.972432107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.463710 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.463913 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.488714 4730 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.565738 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.565937 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d07c9d-b8a5-4c5c-904b-680f85c759ec-config\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb"
Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.566091 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.06604883 +0000 UTC m=+153.077615765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566243 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d455554-2028-496c-a29a-94930265fec5-metrics-tls\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-srv-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566373 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-plugins-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566401 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlss\" (UniqueName: \"kubernetes.io/projected/780f4657-32ba-4755-b1ca-76fbb94ed7b8-kube-api-access-nnlss\") pod \"package-server-manager-789f6589d5-fbhhr\"
(UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566459 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0573adb9-e1f6-4518-a7ce-9ad70cf17705-cert\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566567 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2db\" (UniqueName: \"kubernetes.io/projected/8f3fd734-80ae-4995-938e-6aa33d08265c-kube-api-access-9c2db\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.566770 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-node-bootstrap-token\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.567088 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/1b301164-219c-45d4-ae7a-65efd88d3265-kube-api-access-6gw8z\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.567149 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-profile-collector-cert\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.568631 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9zdk\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.569194 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-cabundle\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.569483 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.570431 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID:
\"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.570500 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b301164-219c-45d4-ae7a-65efd88d3265-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.570624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.570699 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.571418 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.071400394 +0000 UTC m=+153.082967329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.572629 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-registration-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.572779 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jsk\" (UniqueName: \"kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.572811 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34d07c9d-b8a5-4c5c-904b-680f85c759ec-serving-cert\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573016 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName:
\"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-images\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573081 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-srv-cert\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdjn\" (UniqueName: \"kubernetes.io/projected/f9c5a444-f0b8-4ce4-869d-009eba2536e4-kube-api-access-vmdjn\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-apiservice-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573178 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zcp\" (UniqueName: \"kubernetes.io/projected/1d455554-2028-496c-a29a-94930265fec5-kube-api-access-89zcp\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573359 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b301164-219c-45d4-ae7a-65efd88d3265-proxy-tls\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573467 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573537 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573572 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573592 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7v89\" (UniqueName: \"kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b301164-219c-45d4-ae7a-65efd88d3265-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573831 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2kn\" (UniqueName: \"kubernetes.io/projected/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-kube-api-access-jx2kn\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.573930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-certs\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-socket-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574094 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574357 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbst\" (UniqueName: \"kubernetes.io/projected/b612b123-b5f2-40db-a3f9-70ca031cd3f5-kube-api-access-6vbst\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc 
kubenswrapper[4730]: I0221 00:09:10.574445 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574483 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-mountpoint-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574574 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6df18100-1b19-4805-80c6-eb2b939c1248-tmpfs\") pod 
\"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574608 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtq2\" (UniqueName: \"kubernetes.io/projected/0573adb9-e1f6-4518-a7ce-9ad70cf17705-kube-api-access-9mtq2\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574643 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48bs\" (UniqueName: \"kubernetes.io/projected/34d07c9d-b8a5-4c5c-904b-680f85c759ec-kube-api-access-h48bs\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574661 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-proxy-tls\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574679 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-webhook-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574728 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-key\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574748 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4zp\" (UniqueName: \"kubernetes.io/projected/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-kube-api-access-6l4zp\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574774 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d455554-2028-496c-a29a-94930265fec5-config-volume\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkwq\" (UniqueName: \"kubernetes.io/projected/6df18100-1b19-4805-80c6-eb2b939c1248-kube-api-access-rwkwq\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574850 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-csi-data-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") 
" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt5s7\" (UniqueName: \"kubernetes.io/projected/495e727f-2c66-441e-ae90-7e3dcf5e79ce-kube-api-access-lt5s7\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.574937 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/780f4657-32ba-4755-b1ca-76fbb94ed7b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbhhr\" (UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.586981 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.595262 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.596619 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" event={"ID":"20f37963-2f14-4031-ba6a-5a2b2908ed09","Type":"ContainerStarted","Data":"afba9ed1cb7cdde6f67452492080312bb5cb09e28604e5d09b905b4c579b89f9"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.597164 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.601055 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.601161 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b301164-219c-45d4-ae7a-65efd88d3265-proxy-tls\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.601824 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.606972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" event={"ID":"75d6599f-a7ce-4d05-a452-ebcb1a1fea65","Type":"ContainerStarted","Data":"4b7b5832c044db8e48f13bf3d9de9c544e6e9294b882529c8c1e43e45fe42e3a"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.608092 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/1b301164-219c-45d4-ae7a-65efd88d3265-kube-api-access-6gw8z\") pod \"machine-config-controller-84d6567774-zb52h\" (UID: \"1b301164-219c-45d4-ae7a-65efd88d3265\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.608360 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" event={"ID":"113599a7-fd8d-4bcb-9e1a-ce992776990a","Type":"ContainerStarted","Data":"f9568e831d1e2768ebbe79c5298aa017983b52cc00a26fd43492093b1960e366"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.609441 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ttg62" event={"ID":"5b7d4c04-0a49-410b-a682-b58b6b97a987","Type":"ContainerStarted","Data":"14968cefc488c72639d197559da96e23001a194042bed0e306f12b1d9b861917"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.611861 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" event={"ID":"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f","Type":"ContainerStarted","Data":"ac6793a9259b7c2a85214da44dd382c5dca4c6b39ec50801c1696c19bb063b17"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.611900 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" event={"ID":"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f","Type":"ContainerStarted","Data":"f23951f2d52a8a86078670e0406b3573319666f21dc838a629aed8b613294df4"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.617312 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" 
event={"ID":"c4651f24-ec65-46fe-875a-7bf52568a045","Type":"ContainerStarted","Data":"dbb1cb2fa04acbc1d13f086f170763dadd0a6d48e05b20aa7969cfe840bde414"} Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.617972 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9zdk\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.654852 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.679879 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680545 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-registration-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680579 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jsk\" (UniqueName: 
\"kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680599 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34d07c9d-b8a5-4c5c-904b-680f85c759ec-serving-cert\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680618 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-images\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680633 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-srv-cert\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680649 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdjn\" (UniqueName: \"kubernetes.io/projected/f9c5a444-f0b8-4ce4-869d-009eba2536e4-kube-api-access-vmdjn\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680665 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-apiservice-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680699 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zcp\" (UniqueName: \"kubernetes.io/projected/1d455554-2028-496c-a29a-94930265fec5-kube-api-access-89zcp\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680727 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680762 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7v89\" (UniqueName: \"kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 
crc kubenswrapper[4730]: I0221 00:09:10.680780 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2kn\" (UniqueName: \"kubernetes.io/projected/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-kube-api-access-jx2kn\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680805 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-certs\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680824 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-socket-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbst\" (UniqueName: \"kubernetes.io/projected/b612b123-b5f2-40db-a3f9-70ca031cd3f5-kube-api-access-6vbst\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " 
pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680881 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-mountpoint-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680897 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680913 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680933 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6df18100-1b19-4805-80c6-eb2b939c1248-tmpfs\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680970 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtq2\" (UniqueName: 
\"kubernetes.io/projected/0573adb9-e1f6-4518-a7ce-9ad70cf17705-kube-api-access-9mtq2\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.680990 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48bs\" (UniqueName: \"kubernetes.io/projected/34d07c9d-b8a5-4c5c-904b-680f85c759ec-kube-api-access-h48bs\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681006 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-proxy-tls\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681023 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-webhook-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-key\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681057 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6l4zp\" (UniqueName: \"kubernetes.io/projected/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-kube-api-access-6l4zp\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d455554-2028-496c-a29a-94930265fec5-config-volume\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681087 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-csi-data-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681104 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkwq\" (UniqueName: \"kubernetes.io/projected/6df18100-1b19-4805-80c6-eb2b939c1248-kube-api-access-rwkwq\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681122 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt5s7\" (UniqueName: \"kubernetes.io/projected/495e727f-2c66-441e-ae90-7e3dcf5e79ce-kube-api-access-lt5s7\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 
00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681139 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/780f4657-32ba-4755-b1ca-76fbb94ed7b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbhhr\" (UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681155 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d07c9d-b8a5-4c5c-904b-680f85c759ec-config\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681171 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d455554-2028-496c-a29a-94930265fec5-metrics-tls\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-plugins-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlss\" (UniqueName: \"kubernetes.io/projected/780f4657-32ba-4755-b1ca-76fbb94ed7b8-kube-api-access-nnlss\") pod \"package-server-manager-789f6589d5-fbhhr\" (UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681220 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-srv-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0573adb9-e1f6-4518-a7ce-9ad70cf17705-cert\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681263 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c2db\" (UniqueName: \"kubernetes.io/projected/8f3fd734-80ae-4995-938e-6aa33d08265c-kube-api-access-9c2db\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681279 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-node-bootstrap-token\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681308 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681326 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-cabundle\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.681355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.682710 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.682818 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.182801045 +0000 UTC m=+153.194368080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.683174 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-registration-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.684218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.684525 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6df18100-1b19-4805-80c6-eb2b939c1248-tmpfs\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.688391 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d455554-2028-496c-a29a-94930265fec5-config-volume\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " 
pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.690887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-images\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.691306 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-csi-data-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.691359 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-plugins-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.691544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34d07c9d-b8a5-4c5c-904b-680f85c759ec-serving-cert\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.691655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-mountpoint-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " 
pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.691680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-proxy-tls\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.692750 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.692982 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-cabundle\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.693085 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9c5a444-f0b8-4ce4-869d-009eba2536e4-socket-dir\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.693133 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d07c9d-b8a5-4c5c-904b-680f85c759ec-config\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.696287 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-profile-collector-cert\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.696292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.696344 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/780f4657-32ba-4755-b1ca-76fbb94ed7b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbhhr\" (UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.703574 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-node-bootstrap-token\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.704238 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.704467 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/495e727f-2c66-441e-ae90-7e3dcf5e79ce-srv-cert\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.704811 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b612b123-b5f2-40db-a3f9-70ca031cd3f5-certs\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.706185 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f3fd734-80ae-4995-938e-6aa33d08265c-srv-cert\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.706389 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-webhook-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.708484 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0573adb9-e1f6-4518-a7ce-9ad70cf17705-cert\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.708519 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.713821 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d455554-2028-496c-a29a-94930265fec5-metrics-tls\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.714665 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-signing-key\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.728439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6df18100-1b19-4805-80c6-eb2b939c1248-apiservice-cert\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.735817 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jsk\" (UniqueName: 
\"kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk\") pod \"marketplace-operator-79b997595-pt6nb\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") " pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.736818 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.761230 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.762596 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5mzr9"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.763971 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p7z7k"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.765085 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z7gds"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.772499 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.781111 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48bs\" (UniqueName: \"kubernetes.io/projected/34d07c9d-b8a5-4c5c-904b-680f85c759ec-kube-api-access-h48bs\") pod \"service-ca-operator-777779d784-pvqbb\" (UID: \"34d07c9d-b8a5-4c5c-904b-680f85c759ec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.781440 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtq2\" (UniqueName: \"kubernetes.io/projected/0573adb9-e1f6-4518-a7ce-9ad70cf17705-kube-api-access-9mtq2\") pod \"ingress-canary-8thm5\" (UID: \"0573adb9-e1f6-4518-a7ce-9ad70cf17705\") " pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.782594 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.782982 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.282966388 +0000 UTC m=+153.294533313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.799928 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.822855 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkwq\" (UniqueName: \"kubernetes.io/projected/6df18100-1b19-4805-80c6-eb2b939c1248-kube-api-access-rwkwq\") pod \"packageserver-d55dfcdfc-mpkt5\" (UID: \"6df18100-1b19-4805-80c6-eb2b939c1248\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.826283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.826541 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4zp\" (UniqueName: \"kubernetes.io/projected/b765c1d8-ee90-4dd5-8f01-3a4f38366ee0-kube-api-access-6l4zp\") pod \"machine-config-operator-74547568cd-fzk5w\" (UID: \"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.836220 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.839964 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx2kn\" (UniqueName: \"kubernetes.io/projected/a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501-kube-api-access-jx2kn\") pod \"service-ca-9c57cc56f-6sws4\" (UID: \"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501\") " pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.852113 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.855870 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29527200-v2vdw"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.857722 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-29g9v"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.860568 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p499"] Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.883661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.883768 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 00:09:11.383748915 +0000 UTC m=+153.395315850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.884004 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.884300 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.384293657 +0000 UTC m=+153.395860592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.884961 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdjn\" (UniqueName: \"kubernetes.io/projected/f9c5a444-f0b8-4ce4-869d-009eba2536e4-kube-api-access-vmdjn\") pod \"csi-hostpathplugin-87wrr\" (UID: \"f9c5a444-f0b8-4ce4-869d-009eba2536e4\") " pod="hostpath-provisioner/csi-hostpathplugin-87wrr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.897128 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbst\" (UniqueName: \"kubernetes.io/projected/b612b123-b5f2-40db-a3f9-70ca031cd3f5-kube-api-access-6vbst\") pod \"machine-config-server-kmpmr\" (UID: \"b612b123-b5f2-40db-a3f9-70ca031cd3f5\") " pod="openshift-machine-config-operator/machine-config-server-kmpmr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.899865 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.913091 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8thm5" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.916810 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zcp\" (UniqueName: \"kubernetes.io/projected/1d455554-2028-496c-a29a-94930265fec5-kube-api-access-89zcp\") pod \"dns-default-kv84q\" (UID: \"1d455554-2028-496c-a29a-94930265fec5\") " pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.919847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt5s7\" (UniqueName: \"kubernetes.io/projected/495e727f-2c66-441e-ae90-7e3dcf5e79ce-kube-api-access-lt5s7\") pod \"catalog-operator-68c6474976-g4j9p\" (UID: \"495e727f-2c66-441e-ae90-7e3dcf5e79ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.949200 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7v89\" (UniqueName: \"kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89\") pod \"collect-profiles-29527200-zvsz9\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.953433 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c2db\" (UniqueName: \"kubernetes.io/projected/8f3fd734-80ae-4995-938e-6aa33d08265c-kube-api-access-9c2db\") pod \"olm-operator-6b444d44fb-vdnbn\" (UID: \"8f3fd734-80ae-4995-938e-6aa33d08265c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.975589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnlss\" (UniqueName: 
\"kubernetes.io/projected/780f4657-32ba-4755-b1ca-76fbb94ed7b8-kube-api-access-nnlss\") pod \"package-server-manager-789f6589d5-fbhhr\" (UID: \"780f4657-32ba-4755-b1ca-76fbb94ed7b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:10 crc kubenswrapper[4730]: I0221 00:09:10.995262 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:10 crc kubenswrapper[4730]: E0221 00:09:10.995615 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.495595787 +0000 UTC m=+153.507162722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.096313 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.096685 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.59667299 +0000 UTC m=+153.608239925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.104010 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.110446 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.116935 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.124680 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.171769 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-87wrr"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.172967 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.182090 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmpmr"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.189993 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.197617 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.198022 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.698004309 +0000 UTC m=+153.709571244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.206428 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kv84q"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.299755 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.299936 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hltcd"]
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.300691 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.800675969 +0000 UTC m=+153.812242904 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.313706 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.344430 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl"]
Feb 21 00:09:11 crc kubenswrapper[4730]: W0221 00:09:11.369463 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6be412a_8d96_40ac_967b_96ad68c91c1a.slice/crio-ffaae01fa357240771f0eb3e10c681bec394b98fead284f8f45bd8b4fa158155 WatchSource:0}: Error finding container ffaae01fa357240771f0eb3e10c681bec394b98fead284f8f45bd8b4fa158155: Status 404 returned error can't find the container with id ffaae01fa357240771f0eb3e10c681bec394b98fead284f8f45bd8b4fa158155
Feb 21 00:09:11 crc kubenswrapper[4730]: W0221 00:09:11.372554 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb712f1fb_8e1e_43df_8cec_27a727d1fa4f.slice/crio-e9e06346f14eb692257918b1c9e9827c6414a3dea71828577976d912f67fdf04 WatchSource:0}: Error finding container e9e06346f14eb692257918b1c9e9827c6414a3dea71828577976d912f67fdf04: Status 404 returned error can't find the container with id e9e06346f14eb692257918b1c9e9827c6414a3dea71828577976d912f67fdf04
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.401537 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.401898 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:11.901880856 +0000 UTC m=+153.913447791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.407063 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.418865 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.448465 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.455072 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-95lrd"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.492102 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.503356 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.503675 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.003659364 +0000 UTC m=+154.015226299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.506271 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.516426 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.517598 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.529311 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6sws4"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.544523 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.559438 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-866vd"]
Feb 21 00:09:11 crc kubenswrapper[4730]: W0221 00:09:11.576872 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb7868f_81d4_4a88_99bb_807599f57b97.slice/crio-f27ccef5f2cc92372b2064ea3b74d12a9d89e9d96f29f0c19d983aa9f5b86225 WatchSource:0}: Error finding container f27ccef5f2cc92372b2064ea3b74d12a9d89e9d96f29f0c19d983aa9f5b86225: Status 404 returned error can't find the container with id f27ccef5f2cc92372b2064ea3b74d12a9d89e9d96f29f0c19d983aa9f5b86225
Feb 21 00:09:11 crc kubenswrapper[4730]: W0221 00:09:11.585665 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ca121a_9e67_478c_bbe4_484e36eb185f.slice/crio-59446ace281241c52c94fabd9054993fe62f063d678ef94b5b073f5bdbd59de3 WatchSource:0}: Error finding container 59446ace281241c52c94fabd9054993fe62f063d678ef94b5b073f5bdbd59de3: Status 404 returned error can't find the container with id 59446ace281241c52c94fabd9054993fe62f063d678ef94b5b073f5bdbd59de3
Feb 21 00:09:11 crc kubenswrapper[4730]: W0221 00:09:11.595264 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ec45fc_19c2_439f_b7cf_77fcfd764ae3.slice/crio-000ea4521f356f8949515cce1bb5dfff6521c19d74897ad0dac907a55d9e02dd WatchSource:0}: Error finding container 000ea4521f356f8949515cce1bb5dfff6521c19d74897ad0dac907a55d9e02dd: Status 404 returned error can't find the container with id 000ea4521f356f8949515cce1bb5dfff6521c19d74897ad0dac907a55d9e02dd
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.604594 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.605089 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.105069346 +0000 UTC m=+154.116636281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.639019 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" event={"ID":"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6","Type":"ContainerStarted","Data":"b21c1ad440a18bc75b4359713e02507f194e91c0b173407b4032ba904eea5add"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.639089 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" event={"ID":"44276c56-9aa7-4cb4-86d9-0a6c7af7d5e6","Type":"ContainerStarted","Data":"d1df6db69054913b793cb1496bb0b1b3c88a02429a3fab8c0ee2319f39d8a665"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.641510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" event={"ID":"1b8a1e85-beca-4608-b727-1ee293d094b2","Type":"ContainerStarted","Data":"6ce732b0c07d2c601887fed7e47babef3f558a4129d3ddbaff9dc55d1cd4ceec"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.647544 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" event={"ID":"c4a5e192-cdaf-4dda-9aa9-93d2d5ced68f","Type":"ContainerStarted","Data":"1d96a6f7ed0a874bd7fedd65a3048e8780ce833599ad625597eaf9340bcdd7de"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.651847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" event={"ID":"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd","Type":"ContainerStarted","Data":"a120d35c2cf8e65101e1b07c672b0c771c2976a19548e523ff042546defae03e"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.652930 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" event={"ID":"b712f1fb-8e1e-43df-8cec-27a727d1fa4f","Type":"ContainerStarted","Data":"e9e06346f14eb692257918b1c9e9827c6414a3dea71828577976d912f67fdf04"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.653892 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8thm5"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.654189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p7z7k" event={"ID":"ba763110-213a-49ec-9385-e723b6a02fc8","Type":"ContainerStarted","Data":"7a9057c0a4f24bd231c318a4bad9f6d4e8714ced65009e8c8ff0b0b9c553b01c"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.659267 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" event={"ID":"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6","Type":"ContainerStarted","Data":"f15d63aed74312c41d00c818e7ed06c0df51a10d98d28c15bfe09c3a5a957ecc"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.661789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" event={"ID":"c34885d2-a418-472a-92fe-161f64359e5b","Type":"ContainerStarted","Data":"7e8c10e814d32d132ba5e89f6ca772303537d4c39df5e8cb0becf1be9d1becd0"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.664738 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" event={"ID":"c4651f24-ec65-46fe-875a-7bf52568a045","Type":"ContainerStarted","Data":"bcbccfb6ee58e03d7d9f4b89ee15cd0e41532ebaf377ad9a2df881e91b3250bb"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.665634 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.667347 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-sqqlg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.667376 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" event={"ID":"75d6599f-a7ce-4d05-a452-ebcb1a1fea65","Type":"ContainerStarted","Data":"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.667415 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" podUID="c4651f24-ec65-46fe-875a-7bf52568a045" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.668040 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.668454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" event={"ID":"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0","Type":"ContainerStarted","Data":"484bcef012331dc318cc2a8c32a7350d480d917eb14b188f70e228c6ed0e75ff"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.669173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p499" event={"ID":"eb679c27-8952-42f2-914e-13e4a969c408","Type":"ContainerStarted","Data":"357de7381c90fac830cc6de24f6033ede20abc725e11522e28142bbb465db5b0"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.672234 4730 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c2cjf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body=
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.672269 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused"
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.672398 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wxl66" event={"ID":"dbd3805f-f503-444d-88c7-829832077376","Type":"ContainerStarted","Data":"8faeffecc6ff702ee0a1223566dab01fcbdb020c47046f2ee72ad6b4abf7ddcd"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.672425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wxl66" event={"ID":"dbd3805f-f503-444d-88c7-829832077376","Type":"ContainerStarted","Data":"6998107d465e716e94e6b8397cc0864ad37cd3a73a060e1f228196a31ee728e9"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.675032 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h"
event={"ID":"1b301164-219c-45d4-ae7a-65efd88d3265","Type":"ContainerStarted","Data":"76e0b8b02c66a97353486cfd42e5a0edf7bdcfd9f61b91b0cc74843811b7989f"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.678561 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" event={"ID":"b3ca121a-9e67-478c-bbe4-484e36eb185f","Type":"ContainerStarted","Data":"59446ace281241c52c94fabd9054993fe62f063d678ef94b5b073f5bdbd59de3"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.679514 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" event={"ID":"34d07c9d-b8a5-4c5c-904b-680f85c759ec","Type":"ContainerStarted","Data":"55816d2c7fc0350cf267d7f3219f62a1db2b262a3d618e1743fb15304eb1b694"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.680228 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" event={"ID":"3334b1ca-87b2-436d-a994-15634b8240e2","Type":"ContainerStarted","Data":"8f353f0a330510ae61f408eef3cdf2941551644ca643bc90e7a2d6d98d04bbfb"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.682187 4730 generic.go:334] "Generic (PLEG): container finished" podID="20f37963-2f14-4031-ba6a-5a2b2908ed09" containerID="27ad585999d1431f0673da042e3600c7ee85355bc20f623f0fcb15841cc829e6" exitCode=0
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.682253 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" event={"ID":"20f37963-2f14-4031-ba6a-5a2b2908ed09","Type":"ContainerDied","Data":"27ad585999d1431f0673da042e3600c7ee85355bc20f623f0fcb15841cc829e6"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.684634 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" event={"ID":"cf6bdbe0-9c57-4476-99f4-e837b5277f1a","Type":"ContainerStarted","Data":"e8824d5c11f50cb378e085073ba5f430d8c879aaab5f3546eb9ac380afc6e0c5"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.685815 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" event={"ID":"9eb7868f-81d4-4a88-99bb-807599f57b97","Type":"ContainerStarted","Data":"f27ccef5f2cc92372b2064ea3b74d12a9d89e9d96f29f0c19d983aa9f5b86225"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.687533 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29527200-v2vdw" event={"ID":"bdfac2f0-2a56-43ab-b248-a9c523e41856","Type":"ContainerStarted","Data":"fbf23865a6e0a8528831b9a994ee2a32ab6cdb4dfeb916a1f7c50a49f381eed6"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.689778 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" event={"ID":"113599a7-fd8d-4bcb-9e1a-ce992776990a","Type":"ContainerStarted","Data":"44d7b310ffed1cd08dfe249476cffbbe6706b53f3eab9f2db8655511a1a51591"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.692093 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" event={"ID":"afba91ef-7949-490e-9903-0751d7f84d27","Type":"ContainerStarted","Data":"c39aa8a9b89f48b525a6971b911f85436a061489c07784ca7ee799f5569e9844"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.694541 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" event={"ID":"d6be412a-8d96-40ac-967b-96ad68c91c1a","Type":"ContainerStarted","Data":"ffaae01fa357240771f0eb3e10c681bec394b98fead284f8f45bd8b4fa158155"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.697222 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" event={"ID":"81ed0256-3be9-4994-85ee-f35c6be1bf63","Type":"ContainerStarted","Data":"ade2ccd84efe07e5fcf9b736a23b32d59ff8f3a5d613b3679caa0989cd1eea16"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.698452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" event={"ID":"ab327ef5-0e37-4dca-b380-5a0a3e1060af","Type":"ContainerStarted","Data":"fe3bc797ee40b8edc5b3ab111a4cb081aefeb45ba822a0e5fbc9a09238da3013"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.699525 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" event={"ID":"49ec45fc-19c2-439f-b7cf-77fcfd764ae3","Type":"ContainerStarted","Data":"000ea4521f356f8949515cce1bb5dfff6521c19d74897ad0dac907a55d9e02dd"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.705607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.706736 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.206724132 +0000 UTC m=+154.218291057 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.707540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" event={"ID":"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef","Type":"ContainerStarted","Data":"bef012223b32af3200739cbf91c1c3ad94d7f1a658951f5d099eee6f85bfe144"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.711454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" event={"ID":"108cc2af-75e9-47f2-a82e-03a59074bef8","Type":"ContainerStarted","Data":"4d9ffd86ede7227d4c085f053c5305db24f882c1c5adde74811ca666415abad5"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.720336 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ttg62" event={"ID":"5b7d4c04-0a49-410b-a682-b58b6b97a987","Type":"ContainerStarted","Data":"81d4c384ed15ff888c75d4dfbb2f2b58ac1a44aabb27f8d919ec21281a3ad4f8"}
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.807337 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.809337 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.309321671 +0000 UTC m=+154.320888606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.850507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"]
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.913742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:11 crc kubenswrapper[4730]: E0221 00:09:11.915516 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.415495953 +0000 UTC m=+154.427062888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:11 crc kubenswrapper[4730]: I0221 00:09:11.934966 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"]
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.010774 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"]
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.018660 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.019099 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.519083934 +0000 UTC m=+154.530650869 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.062553 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w"]
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.120571 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.121118 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.621102988 +0000 UTC m=+154.632669923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.143492 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kv84q"]
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.145188 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9"]
Feb 21 00:09:12 crc kubenswrapper[4730]: W0221 00:09:12.154633 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod495e727f_2c66_441e_ae90_7e3dcf5e79ce.slice/crio-9471b03853dbdd0bad2487c4c6801498cc4f08a470c101546d2c96f53327e4b6 WatchSource:0}: Error finding container 9471b03853dbdd0bad2487c4c6801498cc4f08a470c101546d2c96f53327e4b6: Status 404 returned error can't find the container with id 9471b03853dbdd0bad2487c4c6801498cc4f08a470c101546d2c96f53327e4b6
Feb 21 00:09:12 crc kubenswrapper[4730]: W0221 00:09:12.159150 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f3fd734_80ae_4995_938e_6aa33d08265c.slice/crio-35b782272c46aec70d7de945b7ee5b87182298d4831881fa1e92d4f720536d00 WatchSource:0}: Error finding container 35b782272c46aec70d7de945b7ee5b87182298d4831881fa1e92d4f720536d00: Status 404 returned error can't find the container with id 35b782272c46aec70d7de945b7ee5b87182298d4831881fa1e92d4f720536d00
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.199335 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr"]
Feb 21 00:09:12 crc kubenswrapper[4730]: W0221 00:09:12.212679 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb765c1d8_ee90_4dd5_8f01_3a4f38366ee0.slice/crio-359b21437ce472749b1c63fe5399b87293f140b1d22bc1a60a37160eb0761826 WatchSource:0}: Error finding container 359b21437ce472749b1c63fe5399b87293f140b1d22bc1a60a37160eb0761826: Status 404 returned error can't find the container with id 359b21437ce472749b1c63fe5399b87293f140b1d22bc1a60a37160eb0761826
Feb 21 00:09:12 crc kubenswrapper[4730]: W0221 00:09:12.218275 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d455554_2028_496c_a29a_94930265fec5.slice/crio-58f5979c55294dbc120179aea652e0d9f69e2288ff49de8a3f378f58bfc8b483 WatchSource:0}: Error finding container 58f5979c55294dbc120179aea652e0d9f69e2288ff49de8a3f378f58bfc8b483: Status 404 returned error can't find the container with id 58f5979c55294dbc120179aea652e0d9f69e2288ff49de8a3f378f58bfc8b483
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.222297 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.222445 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.722418338 +0000 UTC m=+154.733985273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.222549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.223192 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.723179685 +0000 UTC m=+154.734746620 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.235824 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-87wrr"] Feb 21 00:09:12 crc kubenswrapper[4730]: W0221 00:09:12.273100 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb612b123_b5f2_40db_a3f9_70ca031cd3f5.slice/crio-bb146726936799d1bb4ffe58e7e96df884e71ea822d082a828d669ccbad0f6d1 WatchSource:0}: Error finding container bb146726936799d1bb4ffe58e7e96df884e71ea822d082a828d669ccbad0f6d1: Status 404 returned error can't find the container with id bb146726936799d1bb4ffe58e7e96df884e71ea822d082a828d669ccbad0f6d1 Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.325589 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.325790 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.825758184 +0000 UTC m=+154.837325119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.326231 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.326776 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.826757906 +0000 UTC m=+154.838324841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.419195 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.425061 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.425115 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.427896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.428136 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.928092086 +0000 UTC m=+154.939659021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.428417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.428897 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:12.928877183 +0000 UTC m=+154.940444118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.492526 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wxl66" podStartSLOduration=133.492509013 podStartE2EDuration="2m13.492509013s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:12.491798396 +0000 UTC m=+154.503365341" watchObservedRunningTime="2026-02-21 00:09:12.492509013 +0000 UTC m=+154.504075948" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.492749 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" podStartSLOduration=133.492742708 podStartE2EDuration="2m13.492742708s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:12.452044979 +0000 UTC m=+154.463611924" watchObservedRunningTime="2026-02-21 00:09:12.492742708 +0000 UTC m=+154.504309643" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.532013 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.532106 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.032092456 +0000 UTC m=+155.043659391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.532857 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.533217 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.033209092 +0000 UTC m=+155.044776027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.533693 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5djw" podStartSLOduration=134.533655912 podStartE2EDuration="2m14.533655912s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:12.532784952 +0000 UTC m=+154.544351897" watchObservedRunningTime="2026-02-21 00:09:12.533655912 +0000 UTC m=+154.545222847" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.609747 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" podStartSLOduration=134.609728178 podStartE2EDuration="2m14.609728178s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:12.595005719 +0000 UTC m=+154.606572664" watchObservedRunningTime="2026-02-21 00:09:12.609728178 +0000 UTC m=+154.621295113" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.627904 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ttg62" podStartSLOduration=133.627875907 podStartE2EDuration="2m13.627875907s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:12.620871925 +0000 UTC m=+154.632438860" watchObservedRunningTime="2026-02-21 00:09:12.627875907 +0000 UTC m=+154.639442842" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.634087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.634505 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.13448703 +0000 UTC m=+155.146053955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.735542 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.736237 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.236224539 +0000 UTC m=+155.247791474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.759064 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kv84q" event={"ID":"1d455554-2028-496c-a29a-94930265fec5","Type":"ContainerStarted","Data":"58f5979c55294dbc120179aea652e0d9f69e2288ff49de8a3f378f58bfc8b483"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.761108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" event={"ID":"b712f1fb-8e1e-43df-8cec-27a727d1fa4f","Type":"ContainerStarted","Data":"14df4bdab861b441413b6007be2d409ede948ca30eabf4ea6b1efeed0fdbc33e"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.777757 4730 generic.go:334] "Generic (PLEG): container finished" podID="eb679c27-8952-42f2-914e-13e4a969c408" containerID="f7f54713e892198efee374236fd0b458f92c50117b3fa473ef80f4be50bca7d5" exitCode=0 Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.777880 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p499" event={"ID":"eb679c27-8952-42f2-914e-13e4a969c408","Type":"ContainerDied","Data":"f7f54713e892198efee374236fd0b458f92c50117b3fa473ef80f4be50bca7d5"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.793189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p7z7k" event={"ID":"ba763110-213a-49ec-9385-e723b6a02fc8","Type":"ContainerStarted","Data":"f90f42750feab5db480590674d1cdf5e0b23cf2ce2aa093043ae8662a049225d"} Feb 21 
00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.793805 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-p7z7k" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.797153 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.797218 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.812236 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" event={"ID":"113599a7-fd8d-4bcb-9e1a-ce992776990a","Type":"ContainerStarted","Data":"cb6d25294afb21e34ca06d520d46674fb740ba50bf1e682217a7935342114fb6"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.814610 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" event={"ID":"d6be412a-8d96-40ac-967b-96ad68c91c1a","Type":"ContainerStarted","Data":"ecb3f0ec94cad50a1e96d1f4f75e014c4c3700ebcdefb4b948afe7aab0eb61b6"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.822690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" event={"ID":"c4f9d16f-28a1-4b63-93bf-bfb8ba1bbaa0","Type":"ContainerStarted","Data":"722d2d4269421debfec452f2a9c9cd655059403c3bd2d87dd62c34c2dd6c9e60"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.826209 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" event={"ID":"780f4657-32ba-4755-b1ca-76fbb94ed7b8","Type":"ContainerStarted","Data":"4e154c37b8a6421f67547b2931ceb57c87e7c970fda7d4c86e2290527ce9c6c9"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.831865 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" event={"ID":"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef","Type":"ContainerStarted","Data":"f13adca90e62bb54a97302742d6a1fc3c44e433de69f98259b120ff611d921c3"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.832891 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.838218 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.838594 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.338576162 +0000 UTC m=+155.350143097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.838714 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" event={"ID":"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0","Type":"ContainerStarted","Data":"359b21437ce472749b1c63fe5399b87293f140b1d22bc1a60a37160eb0761826"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.847156 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b4h9p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.847214 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.849431 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" event={"ID":"8f3fd734-80ae-4995-938e-6aa33d08265c","Type":"ContainerStarted","Data":"35b782272c46aec70d7de945b7ee5b87182298d4831881fa1e92d4f720536d00"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.867387 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" event={"ID":"9eb7868f-81d4-4a88-99bb-807599f57b97","Type":"ContainerStarted","Data":"5f8bd9de6c885ce8e0ad2c42a3eb6a6acda08cfa736a74cc46c57947da581d31"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.876544 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmpmr" event={"ID":"b612b123-b5f2-40db-a3f9-70ca031cd3f5","Type":"ContainerStarted","Data":"bb146726936799d1bb4ffe58e7e96df884e71ea822d082a828d669ccbad0f6d1"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.884517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" event={"ID":"9b4eb008-ae9f-48aa-82b2-48ef108daddd","Type":"ContainerStarted","Data":"8e37da7e43442f9bc1d3573596baf179b076343b41f0b1896cbecfe323dc324c"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.887025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" event={"ID":"cf6bdbe0-9c57-4476-99f4-e837b5277f1a","Type":"ContainerStarted","Data":"dae1c75f4b7c25553a4f9f26ba7048f07206a2f113ee3ddd3f82b58a0dfc8f8f"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.887457 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.899105 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gcll8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.899161 4730 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.936769 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" event={"ID":"6df18100-1b19-4805-80c6-eb2b939c1248","Type":"ContainerStarted","Data":"f6c9e7191222b82aab1ce58764a4ef89733743b32ac739177fa118221fb3707a"} Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.944444 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:12 crc kubenswrapper[4730]: E0221 00:09:12.946183 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.446144095 +0000 UTC m=+155.457711030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.955527    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" event={"ID":"afba91ef-7949-490e-9903-0751d7f84d27","Type":"ContainerStarted","Data":"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.956035    4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb"
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.958595    4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pt6nb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.958695    4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.960651    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" event={"ID":"3334b1ca-87b2-436d-a994-15634b8240e2","Type":"ContainerStarted","Data":"67f7970b9f81b685c6d191182d877d504bd5c74fc6193fe8fd61a61573957986"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.975353    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" event={"ID":"b3ca121a-9e67-478c-bbe4-484e36eb185f","Type":"ContainerStarted","Data":"806c73108392468f63a0b834b91f7a8828678be78b870c4e4613b486ae3b36db"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.980510    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" event={"ID":"495e727f-2c66-441e-ae90-7e3dcf5e79ce","Type":"ContainerStarted","Data":"9471b03853dbdd0bad2487c4c6801498cc4f08a470c101546d2c96f53327e4b6"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.982389    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" event={"ID":"719ba80e-56f2-49ca-96ed-7f53a9159916","Type":"ContainerStarted","Data":"16a20a7ba4c1e4a1f9afc942b24e2aabf633f87c0e9e412797b6c987b1cc9120"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.982420    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" event={"ID":"719ba80e-56f2-49ca-96ed-7f53a9159916","Type":"ContainerStarted","Data":"9ea30c81589e769290fffa625d5dfbf8b26df6fea7e605822bc1157db6a4e66b"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.984479    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29527200-v2vdw" event={"ID":"bdfac2f0-2a56-43ab-b248-a9c523e41856","Type":"ContainerStarted","Data":"b7cc5726750645188b652aa4ac6fe0f39fba537efeecbab3e330cf511f369f19"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.986967    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" event={"ID":"1b301164-219c-45d4-ae7a-65efd88d3265","Type":"ContainerStarted","Data":"534070fbb38ffcfc908c5d1c8b6ab50d87bea406c7acdaf474200065b8a097f4"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.988414    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" event={"ID":"ab327ef5-0e37-4dca-b380-5a0a3e1060af","Type":"ContainerStarted","Data":"0cda666262a73c35c31b7db03c2fd980d44f3f14984d8677e846c07abe3e8621"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.990330    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" event={"ID":"c34885d2-a418-472a-92fe-161f64359e5b","Type":"ContainerStarted","Data":"ff41f21c086b23dc3df88b61bba6be5c6dd681886059cd5a04fb258a9ee4e863"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.993509    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" event={"ID":"f9c5a444-f0b8-4ce4-869d-009eba2536e4","Type":"ContainerStarted","Data":"ee3283789a2051a31b8a08d3441aa8b150a1d597de897776dc826ec693d6c752"}
Feb 21 00:09:12 crc kubenswrapper[4730]: I0221 00:09:12.994645    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" event={"ID":"81ed0256-3be9-4994-85ee-f35c6be1bf63","Type":"ContainerStarted","Data":"054a5e12d8f9426f4838bebfaf4be3d5ee65cae13834d0cc1a0fb7733fb1ece4"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:12.995907    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" event={"ID":"34d07c9d-b8a5-4c5c-904b-680f85c759ec","Type":"ContainerStarted","Data":"9b1d2ec89410f6a57c97cc2388321b366ae75dec7c3a10107311fd12187785ce"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:12.997851    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" event={"ID":"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501","Type":"ContainerStarted","Data":"b81124dde150da067c30b88e453d4999abf904b52a1a2f11f6f0a532a8e216f3"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:12.997870    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" event={"ID":"a6b8b844-5527-4ea9-a6e7-8ebd1b8cb501","Type":"ContainerStarted","Data":"22c45adbb59c4c04b0e10eb75f8636b713deffa4ee90786ed89db2475e966c51"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:12.999471    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" event={"ID":"49ec45fc-19c2-439f-b7cf-77fcfd764ae3","Type":"ContainerStarted","Data":"fc3aa6a8cf8dc45b6c879899a52dfb4fdfee4cc5bfae0cfbf569f694921c109f"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.005704    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" event={"ID":"1b8a1e85-beca-4608-b727-1ee293d094b2","Type":"ContainerStarted","Data":"1ba07f172c40ba5e026f729cf6925bbb64bcf68d25883208e64adba7238a0c0a"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.011672    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8thm5" event={"ID":"0573adb9-e1f6-4518-a7ce-9ad70cf17705","Type":"ContainerStarted","Data":"dc5d7d768e221f52e76af3275a6af030deb86f3074d807c7dcdd387669e69af4"}
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.012282    4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-sqqlg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.012331    4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sqqlg" podUID="c4651f24-ec65-46fe-875a-7bf52568a045" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.012844    4730 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c2cjf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body=
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.012878    4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.045431    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.046500    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.546485391 +0000 UTC m=+155.558052326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.093552    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29527200-v2vdw" podStartSLOduration=135.093529207 podStartE2EDuration="2m15.093529207s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.091146562 +0000 UTC m=+155.102713497" watchObservedRunningTime="2026-02-21 00:09:13.093529207 +0000 UTC m=+155.105096142"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.172984    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.174753    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxmlj" podStartSLOduration=134.174739202 podStartE2EDuration="2m14.174739202s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.138093066 +0000 UTC m=+155.149660011" watchObservedRunningTime="2026-02-21 00:09:13.174739202 +0000 UTC m=+155.186306137"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.179093    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.679075262 +0000 UTC m=+155.690642267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.179640    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" podStartSLOduration=134.179627164 podStartE2EDuration="2m14.179627164s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.175064519 +0000 UTC m=+155.186631454" watchObservedRunningTime="2026-02-21 00:09:13.179627164 +0000 UTC m=+155.191194129"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.223342    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-546xm" podStartSLOduration=135.223325004 podStartE2EDuration="2m15.223325004s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.220410147 +0000 UTC m=+155.231977072" watchObservedRunningTime="2026-02-21 00:09:13.223325004 +0000 UTC m=+155.234891959"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.275353    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.775333384 +0000 UTC m=+155.786900319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.275260    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.275850    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.276242    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.776229994 +0000 UTC m=+155.787796919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.299144    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9sjwc" podStartSLOduration=134.299128434 podStartE2EDuration="2m14.299128434s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.297373303 +0000 UTC m=+155.308940238" watchObservedRunningTime="2026-02-21 00:09:13.299128434 +0000 UTC m=+155.310695369"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.300366    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7hdv5" podStartSLOduration=134.300357362 podStartE2EDuration="2m14.300357362s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.258288601 +0000 UTC m=+155.269855536" watchObservedRunningTime="2026-02-21 00:09:13.300357362 +0000 UTC m=+155.311924307"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.338148    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq7m7" podStartSLOduration=135.338132414 podStartE2EDuration="2m15.338132414s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.331050141 +0000 UTC m=+155.342617086" watchObservedRunningTime="2026-02-21 00:09:13.338132414 +0000 UTC m=+155.349699349"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.385462    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2jp56" podStartSLOduration=134.385441345 podStartE2EDuration="2m14.385441345s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.383247855 +0000 UTC m=+155.394814790" watchObservedRunningTime="2026-02-21 00:09:13.385441345 +0000 UTC m=+155.397008290"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.387108    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.387514    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.887498373 +0000 UTC m=+155.899065308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.417590    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" podStartSLOduration=134.417568297 podStartE2EDuration="2m14.417568297s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.417312442 +0000 UTC m=+155.428879387" watchObservedRunningTime="2026-02-21 00:09:13.417568297 +0000 UTC m=+155.429135242"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.435403    4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 21 00:09:13 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 21 00:09:13 crc kubenswrapper[4730]: [+]process-running ok
Feb 21 00:09:13 crc kubenswrapper[4730]: healthz check failed
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.435453    4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.449742    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podStartSLOduration=134.44972728 podStartE2EDuration="2m14.44972728s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.449447353 +0000 UTC m=+155.461014288" watchObservedRunningTime="2026-02-21 00:09:13.44972728 +0000 UTC m=+155.461294205"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.490046    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.490509    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:13.990494201 +0000 UTC m=+156.002061136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.550382    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6sws4" podStartSLOduration=134.550361713 podStartE2EDuration="2m14.550361713s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.547840315 +0000 UTC m=+155.559407250" watchObservedRunningTime="2026-02-21 00:09:13.550361713 +0000 UTC m=+155.561928648"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.591111    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.591257    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.091230246 +0000 UTC m=+156.102797181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.591497    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.592212    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.092197179 +0000 UTC m=+156.103764114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.632702    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4mngz" podStartSLOduration=134.632683374 podStartE2EDuration="2m14.632683374s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.632593512 +0000 UTC m=+155.644160447" watchObservedRunningTime="2026-02-21 00:09:13.632683374 +0000 UTC m=+155.644250299"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.663857    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8thm5" podStartSLOduration=6.663835922 podStartE2EDuration="6.663835922s" podCreationTimestamp="2026-02-21 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.647797503 +0000 UTC m=+155.659364438" watchObservedRunningTime="2026-02-21 00:09:13.663835922 +0000 UTC m=+155.675402877"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.692006    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.692290    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.1922735 +0000 UTC m=+156.203840435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.698342    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5mzr9" podStartSLOduration=135.698327069 podStartE2EDuration="2m15.698327069s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.697200903 +0000 UTC m=+155.708767838" watchObservedRunningTime="2026-02-21 00:09:13.698327069 +0000 UTC m=+155.709894004"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.737348    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-p7z7k" podStartSLOduration=134.73732939 podStartE2EDuration="2m14.73732939s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.734928484 +0000 UTC m=+155.746495419" watchObservedRunningTime="2026-02-21 00:09:13.73732939 +0000 UTC m=+155.748896325"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.774854    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pvqbb" podStartSLOduration=134.774836316 podStartE2EDuration="2m14.774836316s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:13.77333267 +0000 UTC m=+155.784899605" watchObservedRunningTime="2026-02-21 00:09:13.774836316 +0000 UTC m=+155.786403251"
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.793134    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.793466    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.293454265 +0000 UTC m=+156.305021200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.894217    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.894415    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.394389395 +0000 UTC m=+156.405956330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.894574    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.895107    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.395088651 +0000 UTC m=+156.406655596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:13 crc kubenswrapper[4730]: I0221 00:09:13.995712    4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:13 crc kubenswrapper[4730]: E0221 00:09:13.996040    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.496019932 +0000 UTC m=+156.507586867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.025100    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" event={"ID":"9eb7868f-81d4-4a88-99bb-807599f57b97","Type":"ContainerStarted","Data":"9d588161502690b345cc2544ef1e16a7dee40777b2b1f2a98ea120765cbc65e0"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.036931    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" event={"ID":"1b301164-219c-45d4-ae7a-65efd88d3265","Type":"ContainerStarted","Data":"f1faadca4a319c132ca0c6daca258f15836224429dae1d567cf8904407b697e5"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.041035    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" event={"ID":"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0","Type":"ContainerStarted","Data":"9a3b45c4bfbc6c33724357ffdfdae36bf615e73df6bc494fb216547b0b468111"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.041061    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" event={"ID":"b765c1d8-ee90-4dd5-8f01-3a4f38366ee0","Type":"ContainerStarted","Data":"df17e8b27e52839144871831ca5963bf0526dd8346cc9452f02cb9e709787f41"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.048161    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l28vp" podStartSLOduration=135.048117174 podStartE2EDuration="2m15.048117174s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.046340092 +0000 UTC m=+156.057907037" watchObservedRunningTime="2026-02-21 00:09:14.048117174 +0000 UTC m=+156.059684109"
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.058417    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" event={"ID":"108cc2af-75e9-47f2-a82e-03a59074bef8","Type":"ContainerStarted","Data":"e16f68c237264a5b20095dab2fb27e4ba45d1e3978e2bda0ab6dd852cd3e75e9"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.074184    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zb52h" podStartSLOduration=135.074166225 podStartE2EDuration="2m15.074166225s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.071402231 +0000 UTC m=+156.082969176" watchObservedRunningTime="2026-02-21 00:09:14.074166225 +0000 UTC m=+156.085733160"
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.079622    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p499" event={"ID":"eb679c27-8952-42f2-914e-13e4a969c408","Type":"ContainerStarted","Data":"1b89dcda670fd0440d55be1f76d0893f01c2f1e48737c68e790a7a6ba3680ac5"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.086407    4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" event={"ID":"c34885d2-a418-472a-92fe-161f64359e5b","Type":"ContainerStarted","Data":"e43e0d5e995a348ea68905a1fd6ddaae0fd575c2e54ff94fc7fd256ad3830997"}
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.086431    4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kqglz" podStartSLOduration=135.086418608 podStartE2EDuration="2m15.086418608s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.085672701 +0000 UTC m=+156.097239656" watchObservedRunningTime="2026-02-21 00:09:14.086418608 +0000 UTC m=+156.097985553"
Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.097419    4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.102534    4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.60250477 +0000 UTC m=+156.614071705 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.143161 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" event={"ID":"719ba80e-56f2-49ca-96ed-7f53a9159916","Type":"ContainerStarted","Data":"8ce12efb1f0bc1a8145fc4ba156154ddbfcdaab648af94170ef22faebb815826"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.148428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" event={"ID":"20f37963-2f14-4031-ba6a-5a2b2908ed09","Type":"ContainerStarted","Data":"32307982b03b86d6137213379883d802ac66cdde204cb6e2006ab4a9940cef62"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.159236 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-29g9v" podStartSLOduration=135.159223439 podStartE2EDuration="2m15.159223439s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.117652109 +0000 UTC m=+156.129219044" watchObservedRunningTime="2026-02-21 00:09:14.159223439 +0000 UTC m=+156.170790374" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.174512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kv84q" 
event={"ID":"1d455554-2028-496c-a29a-94930265fec5","Type":"ContainerStarted","Data":"8b8eb90a0d4935ecf16a8d762421c68a4bc836f0acde61476aae5aaaa16b7de6"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.188668 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-866vd" podStartSLOduration=135.188646238 podStartE2EDuration="2m15.188646238s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.161214975 +0000 UTC m=+156.172781910" watchObservedRunningTime="2026-02-21 00:09:14.188646238 +0000 UTC m=+156.200213173" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.191077 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" podStartSLOduration=135.191059964 podStartE2EDuration="2m15.191059964s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.190107382 +0000 UTC m=+156.201674307" watchObservedRunningTime="2026-02-21 00:09:14.191059964 +0000 UTC m=+156.202626909" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.195374 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" event={"ID":"81ed0256-3be9-4994-85ee-f35c6be1bf63","Type":"ContainerStarted","Data":"ffb3fe490fc50e79a97f7b59d596f7f120b7b6c985af06729724891c4e6caeec"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.198425 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.199917 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.699902408 +0000 UTC m=+156.711469343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.201225 4730 csr.go:261] certificate signing request csr-twf6v is approved, waiting to be issued Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.213146 4730 csr.go:257] certificate signing request csr-twf6v is issued Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.223918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" event={"ID":"f9958c43-f3a7-4b80-ab5f-4a32607b9fd6","Type":"ContainerStarted","Data":"f713ed09ed9222e79e4e407f83a4f5111d430f08dab34b15031e243da6408ca6"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.230246 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z7gds" podStartSLOduration=135.230228258 podStartE2EDuration="2m15.230228258s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-21 00:09:14.228322564 +0000 UTC m=+156.239889499" watchObservedRunningTime="2026-02-21 00:09:14.230228258 +0000 UTC m=+156.241795193" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.241702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" event={"ID":"b712f1fb-8e1e-43df-8cec-27a727d1fa4f","Type":"ContainerStarted","Data":"ec86dc9f65ffde3c4e84f8eb8c9034287f22f3fb9579f83b16e1d4ab0147f007"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.253195 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8thm5" event={"ID":"0573adb9-e1f6-4518-a7ce-9ad70cf17705","Type":"ContainerStarted","Data":"a9d00cc7c9da80856889e31b6770083e60ebafa1e2d0432a23e7dd47ef88b54e"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.263628 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-95lrd" podStartSLOduration=135.263616369 podStartE2EDuration="2m15.263616369s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.263280101 +0000 UTC m=+156.274847046" watchObservedRunningTime="2026-02-21 00:09:14.263616369 +0000 UTC m=+156.275183304" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.276691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" event={"ID":"780f4657-32ba-4755-b1ca-76fbb94ed7b8","Type":"ContainerStarted","Data":"6243e1fcef30a9702ba786ffe76e91231078ddabcaa7eb2c52f03f9b5180dd70"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.276931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" 
event={"ID":"780f4657-32ba-4755-b1ca-76fbb94ed7b8","Type":"ContainerStarted","Data":"3d58c3722790b38a4159e1eb981ce1e5e4b2a3440471331bec399d71156a25c6"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.277766 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.285087 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hltcd" podStartSLOduration=135.285068364 podStartE2EDuration="2m15.285068364s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.284400119 +0000 UTC m=+156.295967044" watchObservedRunningTime="2026-02-21 00:09:14.285068364 +0000 UTC m=+156.296635299" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.293481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmpmr" event={"ID":"b612b123-b5f2-40db-a3f9-70ca031cd3f5","Type":"ContainerStarted","Data":"c3c7392908413a4830a5c08a2952ee41d574917afffaa2f4367c1ecafaa9415d"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.300738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.303271 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-21 00:09:14.803255714 +0000 UTC m=+156.814822649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.309560 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" podStartSLOduration=135.309542589 podStartE2EDuration="2m15.309542589s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.308227459 +0000 UTC m=+156.319794404" watchObservedRunningTime="2026-02-21 00:09:14.309542589 +0000 UTC m=+156.321109524" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.313359 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" event={"ID":"9b4eb008-ae9f-48aa-82b2-48ef108daddd","Type":"ContainerStarted","Data":"31c92a9fd0765968e0597073d525bef1224e54acc1731bbdfa42964137fdae68"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.329425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" event={"ID":"8f3fd734-80ae-4995-938e-6aa33d08265c","Type":"ContainerStarted","Data":"462f6db5a0e0221f828a70f6b441289fb404d7f273b66aa87eb25cfa7cdc0e99"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.330022 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.332452 4730 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vdnbn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.332566 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" podUID="8f3fd734-80ae-4995-938e-6aa33d08265c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.341740 4730 generic.go:334] "Generic (PLEG): container finished" podID="d6be412a-8d96-40ac-967b-96ad68c91c1a" containerID="ecb3f0ec94cad50a1e96d1f4f75e014c4c3700ebcdefb4b948afe7aab0eb61b6" exitCode=0 Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.341981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" event={"ID":"d6be412a-8d96-40ac-967b-96ad68c91c1a","Type":"ContainerDied","Data":"ecb3f0ec94cad50a1e96d1f4f75e014c4c3700ebcdefb4b948afe7aab0eb61b6"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.343125 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kmpmr" podStartSLOduration=7.343105903 podStartE2EDuration="7.343105903s" podCreationTimestamp="2026-02-21 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.323686356 +0000 UTC m=+156.335253291" watchObservedRunningTime="2026-02-21 00:09:14.343105903 +0000 UTC 
m=+156.354672838" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.345932 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" event={"ID":"6df18100-1b19-4805-80c6-eb2b939c1248","Type":"ContainerStarted","Data":"bbb05df6997d71ac5ee52b3ffebea77405b259436b6834e88085b7502bbc786f"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.347037 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.351157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" event={"ID":"ae1bb125-3f1d-44e7-9d6b-b3e5739acdbd","Type":"ContainerStarted","Data":"9d22866ccfb11eb8ad306d742fba6a76196e2fcbe9e0d0a6a62e8576201d5aa2"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.354532 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" event={"ID":"495e727f-2c66-441e-ae90-7e3dcf5e79ce","Type":"ContainerStarted","Data":"549134d9fa3be809ea1e3ac2eb320e110acd16544e8ad15509665c8e78803dd9"} Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.354585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.356137 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-sqqlg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.356193 4730 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-sqqlg" podUID="c4651f24-ec65-46fe-875a-7bf52568a045" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.357263 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.357305 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.357356 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b4h9p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.357374 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.359577 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mpkt5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.359616 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" podUID="6df18100-1b19-4805-80c6-eb2b939c1248" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.371169 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pt6nb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.371230 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.372990 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" podStartSLOduration=135.372975504 podStartE2EDuration="2m15.372975504s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.37242749 +0000 UTC m=+156.383994425" watchObservedRunningTime="2026-02-21 00:09:14.372975504 +0000 UTC m=+156.384542439" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.377722 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g4j9p 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.377781 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" podUID="495e727f-2c66-441e-ae90-7e3dcf5e79ce" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.381670 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" podStartSLOduration=135.381650614 podStartE2EDuration="2m15.381650614s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.343498603 +0000 UTC m=+156.355065538" watchObservedRunningTime="2026-02-21 00:09:14.381650614 +0000 UTC m=+156.393217549" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.403768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.403848 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 00:09:14.903829906 +0000 UTC m=+156.915396841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.404188 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" podStartSLOduration=135.404177474 podStartE2EDuration="2m15.404177474s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.401762898 +0000 UTC m=+156.413329833" watchObservedRunningTime="2026-02-21 00:09:14.404177474 +0000 UTC m=+156.415744409" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.406029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.407737 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:14.907724136 +0000 UTC m=+156.919291161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.426409 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:14 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:14 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:14 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.426460 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.433126 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" podStartSLOduration=135.433111872 podStartE2EDuration="2m15.433111872s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.426171092 +0000 UTC m=+156.437738027" watchObservedRunningTime="2026-02-21 00:09:14.433111872 +0000 UTC m=+156.444678807" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.508919 4730 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-692cl" podStartSLOduration=135.508900051 podStartE2EDuration="2m15.508900051s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:14.463183116 +0000 UTC m=+156.474750051" watchObservedRunningTime="2026-02-21 00:09:14.508900051 +0000 UTC m=+156.520467036" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.525839 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.526179 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.0261639 +0000 UTC m=+157.037730835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.627813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.628422 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.12840615 +0000 UTC m=+157.139973085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.729078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.729475 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.229444592 +0000 UTC m=+157.241011527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.794014 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.830365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.830639 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.330628009 +0000 UTC m=+157.342194944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.931520 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.931704 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.431680021 +0000 UTC m=+157.443246956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.931765 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:14 crc kubenswrapper[4730]: E0221 00:09:14.932219 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.432205733 +0000 UTC m=+157.443772668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.952522 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.952575 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.954571 4730 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-pzhn2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.954665 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" podUID="20f37963-2f14-4031-ba6a-5a2b2908ed09" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 21 00:09:14 crc kubenswrapper[4730]: I0221 00:09:14.985616 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.032695 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.032919 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.532886337 +0000 UTC m=+157.544453272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.032997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.033288 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.533275796 +0000 UTC m=+157.544842731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.134414 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.134606 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.634577085 +0000 UTC m=+157.646144020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.134682 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.135033 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.635020635 +0000 UTC m=+157.646587570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.214591 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-21 00:04:14 +0000 UTC, rotation deadline is 2026-12-30 14:20:24.793337509 +0000 UTC Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.214627 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7502h11m9.578713386s for next certificate rotation Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.235322 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.235509 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.735479154 +0000 UTC m=+157.747046099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.235616 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.236003 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.735981466 +0000 UTC m=+157.747548401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.336769 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.336974 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.836936196 +0000 UTC m=+157.848503131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.337034 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.337304 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.837292944 +0000 UTC m=+157.848859879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.362864 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" event={"ID":"d6be412a-8d96-40ac-967b-96ad68c91c1a","Type":"ContainerStarted","Data":"d4b0b967a30fc912a9d1beb26664f2ea79ff5c349f48289d2f28a7f5717b614a"} Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.362992 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.365395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p499" event={"ID":"eb679c27-8952-42f2-914e-13e4a969c408","Type":"ContainerStarted","Data":"4d412485ac8e17e74f493cb3cc185f31fade576e58722cb2791159e92bb4bf2e"} Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.366698 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" event={"ID":"f9c5a444-f0b8-4ce4-869d-009eba2536e4","Type":"ContainerStarted","Data":"f155d290d41b7f81af31382d0f96c4fe1f22e937936144aafb922fb0af1018f4"} Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.369772 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kv84q" event={"ID":"1d455554-2028-496c-a29a-94930265fec5","Type":"ContainerStarted","Data":"f809ca06acd324a72a8c3236eb048259246d42aea1e30766ed4ec04ea61c8943"} Feb 21 00:09:15 crc 
kubenswrapper[4730]: I0221 00:09:15.373273 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g4j9p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.373310 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" podUID="495e727f-2c66-441e-ae90-7e3dcf5e79ce" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376041 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376067 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376070 4730 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vdnbn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376109 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn" 
podUID="8f3fd734-80ae-4995-938e-6aa33d08265c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376120 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mpkt5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376135 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" podUID="6df18100-1b19-4805-80c6-eb2b939c1248" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376188 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b4h9p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.376238 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.399576 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" podStartSLOduration=136.399557772 podStartE2EDuration="2m16.399557772s" 
podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:15.39641117 +0000 UTC m=+157.407978105" watchObservedRunningTime="2026-02-21 00:09:15.399557772 +0000 UTC m=+157.411124707" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.423358 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:15 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:15 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:15 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.423438 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.437492 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.438769 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:15.938738017 +0000 UTC m=+157.950304962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.530073 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fzk5w" podStartSLOduration=136.530055874 podStartE2EDuration="2m16.530055874s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:15.463113789 +0000 UTC m=+157.474680734" watchObservedRunningTime="2026-02-21 00:09:15.530055874 +0000 UTC m=+157.541622809" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.531436 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kv84q" podStartSLOduration=8.531430436 podStartE2EDuration="8.531430436s" podCreationTimestamp="2026-02-21 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:15.52940534 +0000 UTC m=+157.540972275" watchObservedRunningTime="2026-02-21 00:09:15.531430436 +0000 UTC m=+157.542997371" Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.539553 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: 
\"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.539907 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.039891241 +0000 UTC m=+158.051458176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.640843 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.641171 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.14115642 +0000 UTC m=+158.152723355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.641198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.641477 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.141470607 +0000 UTC m=+158.153037542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.742562 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.742703 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.242683713 +0000 UTC m=+158.254250648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.742807 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.743140 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.243123123 +0000 UTC m=+158.254690058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.843522 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.844115 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.344051433 +0000 UTC m=+158.355618368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:15 crc kubenswrapper[4730]: I0221 00:09:15.945202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:15 crc kubenswrapper[4730]: E0221 00:09:15.945637 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.445598878 +0000 UTC m=+158.457172373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.046718 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.046901 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.546876135 +0000 UTC m=+158.558443070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.047637 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.047956 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.54793613 +0000 UTC m=+158.559503065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.148742 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.149067 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.649051474 +0000 UTC m=+158.660618409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.250273 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.250601 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.750589578 +0000 UTC m=+158.762156513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.351726 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.352008 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.851994029 +0000 UTC m=+158.863560964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.377031 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.377447 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mpkt5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.377476 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5" podUID="6df18100-1b19-4805-80c6-eb2b939c1248" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.426980 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:16 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:16 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:16 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.427464 4730 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.452847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.455399 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:16.955388076 +0000 UTC m=+158.966955011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.553956 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.554095 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.054066214 +0000 UTC m=+159.065633149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.554235 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.554535 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.054527515 +0000 UTC m=+159.066094440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.655777 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.655974 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.155936795 +0000 UTC m=+159.167503730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.656364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.656708 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.156698093 +0000 UTC m=+159.168265028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.757860 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.758197 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.258182876 +0000 UTC m=+159.269749811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.859532 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.859814 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.359801832 +0000 UTC m=+159.371368767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.960771 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:16 crc kubenswrapper[4730]: E0221 00:09:16.961156 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.461139981 +0000 UTC m=+159.472706916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:16 crc kubenswrapper[4730]: I0221 00:09:16.994367 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.062484 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.064058 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.564045437 +0000 UTC m=+159.575612372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.098046 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6p499" podStartSLOduration=139.098031691 podStartE2EDuration="2m19.098031691s" podCreationTimestamp="2026-02-21 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:15.556080506 +0000 UTC m=+157.567647441" watchObservedRunningTime="2026-02-21 00:09:17.098031691 +0000 UTC m=+159.109598626" Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.163503 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.163751 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.663736728 +0000 UTC m=+159.675303663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.264387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.264676 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.764662948 +0000 UTC m=+159.776229883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.365821 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.366207 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:17.866190612 +0000 UTC m=+159.877757547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.423484 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:17 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:17 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:17 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.423535 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.467318 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.467593 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-21 00:09:17.967579112 +0000 UTC m=+159.979146047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.508712 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"] Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.509652 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.517335 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.528922 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"] Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.568463 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.568608 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities\") pod \"certified-operators-zr5rw\" (UID: 
\"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.568671 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.568704 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6kw\" (UniqueName: \"kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.568788 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.068774448 +0000 UTC m=+160.080341373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.670139 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6kw\" (UniqueName: \"kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.671016 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.671162 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.671346 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.671434 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.671614 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.171593832 +0000 UTC m=+160.183160827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.671741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.698494 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95vbk"]
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.699583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.703238 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.717825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6kw\" (UniqueName: \"kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw\") pod \"certified-operators-zr5rw\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") " pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.719396 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95vbk"]
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.772386 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.772701 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkxt\" (UniqueName: \"kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.772756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.772796 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.772895 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.27287976 +0000 UTC m=+160.284446695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.825738 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkxt\" (UniqueName: \"kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874266 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874682 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.874893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.875375 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.375357476 +0000 UTC m=+160.386924501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.908919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkxt\" (UniqueName: \"kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt\") pod \"community-operators-95vbk\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.915167 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"]
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.916018 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"]
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.916092 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.975770 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.975990 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.976025 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb96x\" (UniqueName: \"kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:17 crc kubenswrapper[4730]: I0221 00:09:17.976126 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:17 crc kubenswrapper[4730]: E0221 00:09:17.976230 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.476212574 +0000 UTC m=+160.487779509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.009357 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.010003 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.021843 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.022124 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.028192 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.032221 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082720 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb96x\" (UniqueName: \"kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082759 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.082889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.083695 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.083979 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.084247 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.584232708 +0000 UTC m=+160.595799733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.126693 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sgmxz"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.127595 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.157623 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgmxz"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.171588 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb96x\" (UniqueName: \"kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x\") pod \"certified-operators-kjwnr\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186489 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186680 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186734 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186805 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186863 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gw9\" (UniqueName: \"kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.186893 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.187516 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.687496412 +0000 UTC m=+160.699063347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.187565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.235367 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.288693 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gw9\" (UniqueName: \"kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.289083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.289122 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.289159 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.289450 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.789438305 +0000 UTC m=+160.801005240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.289774 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.289851 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.299189 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjwnr"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.335351 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gw9\" (UniqueName: \"kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9\") pod \"community-operators-sgmxz\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.365894 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.390366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.390874 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.890852216 +0000 UTC m=+160.902419161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.406624 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.429688 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 21 00:09:18 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 21 00:09:18 crc kubenswrapper[4730]: [+]process-running ok
Feb 21 00:09:18 crc kubenswrapper[4730]: healthz check failed
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.429745 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.491600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.491863 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:18.991852358 +0000 UTC m=+161.003419293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.504259 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgmxz"
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.592248 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.593069 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.093052974 +0000 UTC m=+161.104619909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.685554 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95vbk"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.693978 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.694314 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.194300361 +0000 UTC m=+161.205867296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: W0221 00:09:18.720633 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6298cfe2_11f4_453f_9cfc_63aceb67b191.slice/crio-83b8769bb988fb24d5f075c33a5733f5141ae57f9c61254ada6b154d5a1b5119 WatchSource:0}: Error finding container 83b8769bb988fb24d5f075c33a5733f5141ae57f9c61254ada6b154d5a1b5119: Status 404 returned error can't find the container with id 83b8769bb988fb24d5f075c33a5733f5141ae57f9c61254ada6b154d5a1b5119
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.797664 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.798109 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.298094327 +0000 UTC m=+161.309661262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.899930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:18 crc kubenswrapper[4730]: E0221 00:09:18.900228 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.400216505 +0000 UTC m=+161.411783440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.902645 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"]
Feb 21 00:09:18 crc kubenswrapper[4730]: W0221 00:09:18.917968 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782ae357_da25_45f3_a274_048aa2ffdbf0.slice/crio-a78def995ce6ff2e3a981e0cbd55bf071065a1788295e4e72cdff8d0b498479b WatchSource:0}: Error finding container a78def995ce6ff2e3a981e0cbd55bf071065a1788295e4e72cdff8d0b498479b: Status 404 returned error can't find the container with id a78def995ce6ff2e3a981e0cbd55bf071065a1788295e4e72cdff8d0b498479b
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.944189 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 21 00:09:18 crc kubenswrapper[4730]: I0221 00:09:18.983747 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgmxz"]
Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.000647 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.000921 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.500904849 +0000 UTC m=+161.512471804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: W0221 00:09:19.034721 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db4a3c0_933e_4bac_b03c_46e8631ab467.slice/crio-f5fbca68fc553b9e1650d63e20346a01836e5ce7189bb1d205001aca5bc03c79 WatchSource:0}: Error finding container f5fbca68fc553b9e1650d63e20346a01836e5ce7189bb1d205001aca5bc03c79: Status 404 returned error can't find the container with id f5fbca68fc553b9e1650d63e20346a01836e5ce7189bb1d205001aca5bc03c79 Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.106145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.106528 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.606512188 +0000 UTC m=+161.618079123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.207092 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.207522 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.707507769 +0000 UTC m=+161.719074704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.278077 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zz7r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.308898 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.309375 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.80936234 +0000 UTC m=+161.820929275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.410508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.410672 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.910647329 +0000 UTC m=+161.922214264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.410805 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.411123 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:19.911113099 +0000 UTC m=+161.922680034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.423019 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:19 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:19 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:19 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.423349 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.423751 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerStarted","Data":"f5fbca68fc553b9e1650d63e20346a01836e5ce7189bb1d205001aca5bc03c79"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.425024 4730 generic.go:334] "Generic (PLEG): container finished" podID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerID="dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0" exitCode=0 Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.425083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerDied","Data":"dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.425108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerStarted","Data":"83b8769bb988fb24d5f075c33a5733f5141ae57f9c61254ada6b154d5a1b5119"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.426428 4730 generic.go:334] "Generic (PLEG): container finished" podID="406888dc-7d00-47e2-8c63-05e0106525e1" containerID="fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b" exitCode=0 Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.426476 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerDied","Data":"fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.426536 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerStarted","Data":"4e99e843e2b726988a5fedfa42b2b525ccd0cd44742d921accf72977afc7181e"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.426595 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.427296 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"04840b8c-dc72-455f-b41d-1914a1024ed8","Type":"ContainerStarted","Data":"b68e12e5bd1f8c7daa6748b9bf15fcb0c942bf7c1e47a5382ccce25eb0327745"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.429973 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerStarted","Data":"66ca9990ec8b62a73f29ca30a78f9f8790d3fe11d18d6d493374e5172663c54c"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.430012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerStarted","Data":"a78def995ce6ff2e3a981e0cbd55bf071065a1788295e4e72cdff8d0b498479b"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.431762 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" event={"ID":"f9c5a444-f0b8-4ce4-869d-009eba2536e4","Type":"ContainerStarted","Data":"85de00a01f7627ce36300432ff443a5611ccfd6518a9353accb497e17bbe8066"} Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.511504 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.511715 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.011669081 +0000 UTC m=+162.023236016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.511818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.512168 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.012156281 +0000 UTC m=+162.023723216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.612757 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.613109 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.1130364 +0000 UTC m=+162.124603345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.693010 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"] Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.693995 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.696244 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.709894 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"] Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.713778 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.713827 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.713879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvcq\" (UniqueName: \"kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.714051 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.714399 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.21438344 +0000 UTC m=+162.225950475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.814978 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.815170 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.815207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.815239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctvcq\" (UniqueName: \"kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.815575 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.315559966 +0000 UTC m=+162.327126901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.815898 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.816148 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.847005 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctvcq\" (UniqueName: \"kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq\") pod \"redhat-marketplace-pmq2r\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") " pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.916624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:19 crc kubenswrapper[4730]: E0221 00:09:19.916890 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.416879014 +0000 UTC m=+162.428445949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.957896 4730 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.961937 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.970129 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pzhn2" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.999044 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:19 crc kubenswrapper[4730]: I0221 00:09:19.999413 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.004483 4730 patch_prober.go:28] interesting 
pod/console-f9d7485db-ttg62 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.004530 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ttg62" podUID="5b7d4c04-0a49-410b-a682-b58b6b97a987" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.017256 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.017514 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.517486677 +0000 UTC m=+162.529053622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.017835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.020679 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.52066088 +0000 UTC m=+162.532227825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.067509 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmq2r"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.085168 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sqqlg"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.098958 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"]
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.099963 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.114297 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"]
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.119898 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.120149 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.620116156 +0000 UTC m=+162.631683101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.120461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.120543 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.120571 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.120665 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgdq\" (UniqueName: \"kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.121506 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.621489698 +0000 UTC m=+162.633056733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.188694 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.188737 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6p499"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.198221 4730 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6p499 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]log ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]etcd ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/max-in-flight-filter ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 21 00:09:20 crc kubenswrapper[4730]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-startinformers ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 21 00:09:20 crc kubenswrapper[4730]: livez check failed
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.198261 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6p499" podUID="eb679c27-8952-42f2-914e-13e4a969c408" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.222482 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.222669 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.222741 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.222791 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgdq\" (UniqueName: \"kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.223863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.224469 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.724446615 +0000 UTC m=+162.736013550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.225045 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.235785 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.247847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgdq\" (UniqueName: \"kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq\") pod \"redhat-marketplace-6bcmp\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.290669 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.290724 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.291197 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.291214 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.323805 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.324321 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.82430801 +0000 UTC m=+162.835874945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.408659 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"]
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.418875 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wxl66"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.423257 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 21 00:09:20 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld
Feb 21 00:09:20 crc kubenswrapper[4730]: [+]process-running ok
Feb 21 00:09:20 crc kubenswrapper[4730]: healthz check failed
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.423307 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.424718 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.425816 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:20.925795513 +0000 UTC m=+162.937362448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.439783 4730 generic.go:334] "Generic (PLEG): container finished" podID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerID="78a09d4f840e3a929f56b8121ee0145bd33214c297e8d2cc9016296611628f3f" exitCode=0
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.439866 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerDied","Data":"78a09d4f840e3a929f56b8121ee0145bd33214c297e8d2cc9016296611628f3f"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.440975 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerStarted","Data":"f4ccc8e5f2ca01e888d2c7010ff9932e0141b2c74b6d1ad360b00d1b8d5e0eed"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.443305 4730 generic.go:334] "Generic (PLEG): container finished" podID="04840b8c-dc72-455f-b41d-1914a1024ed8" containerID="73a4d3512c276223b48c2546912a5afcad193acf5e47946c7d24990753114ea9" exitCode=0
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.443376 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"04840b8c-dc72-455f-b41d-1914a1024ed8","Type":"ContainerDied","Data":"73a4d3512c276223b48c2546912a5afcad193acf5e47946c7d24990753114ea9"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.445147 4730 generic.go:334] "Generic (PLEG): container finished" podID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerID="66ca9990ec8b62a73f29ca30a78f9f8790d3fe11d18d6d493374e5172663c54c" exitCode=0
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.445193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerDied","Data":"66ca9990ec8b62a73f29ca30a78f9f8790d3fe11d18d6d493374e5172663c54c"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.450318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" event={"ID":"f9c5a444-f0b8-4ce4-869d-009eba2536e4","Type":"ContainerStarted","Data":"14a4538ff24c8069cf57bfd30d7b8028039b44b186529fc248b0cf681b84d9d8"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.450426 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" event={"ID":"f9c5a444-f0b8-4ce4-869d-009eba2536e4","Type":"ContainerStarted","Data":"b9fbba4a1f279bd0c33432c9e7a3648be874f7fb9d0ba41e5a7682b6965c1ce0"}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.460583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bcmp"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.485613 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-87wrr" podStartSLOduration=13.485596933 podStartE2EDuration="13.485596933s" podCreationTimestamp="2026-02-21 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:20.483617328 +0000 UTC m=+162.495184263" watchObservedRunningTime="2026-02-21 00:09:20.485596933 +0000 UTC m=+162.497163868"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.527020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.527601 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.027590033 +0000 UTC m=+163.039156968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.627798 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.628206 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.128149184 +0000 UTC m=+163.139716139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.641155 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"]
Feb 21 00:09:20 crc kubenswrapper[4730]: W0221 00:09:20.655691 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f9e70a_3fc3_4d58_ab8f_d9fcb86f246e.slice/crio-42d1f7a96b506460535038c73adfbdf016f348e88fbe546d09ebfeb0b60f3fa7 WatchSource:0}: Error finding container 42d1f7a96b506460535038c73adfbdf016f348e88fbe546d09ebfeb0b60f3fa7: Status 404 returned error can't find the container with id 42d1f7a96b506460535038c73adfbdf016f348e88fbe546d09ebfeb0b60f3fa7
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.707868 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"]
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.709369 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"]
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.709610 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.712986 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.729674 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vrh\" (UniqueName: \"kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.729737 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.729804 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.729837 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.730343 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.230326923 +0000 UTC m=+163.241893858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.831061 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.831350 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.331298734 +0000 UTC m=+163.342865679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.831641 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.831677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.832146 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vrh\" (UniqueName: \"kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.832205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.832369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: E0221 00:09:20.832534 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 00:09:21.332519962 +0000 UTC m=+163.344086897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rp9n4" (UID: "8deb8736-a138-49f3-9550-060511014aaf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.835989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.842993 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.874097 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vrh\" (UniqueName: \"kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh\") pod \"redhat-operators-ffq6b\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.879109 4730 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-21T00:09:19.957922843Z","Handler":null,"Name":""}
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.882253 4730 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.882291 4730 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.933249 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 00:09:20 crc kubenswrapper[4730]: I0221 00:09:20.939741 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.034577 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.037449 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.037489 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.070203 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.104017 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"]
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.105012 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjcl9"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.119152 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.128495 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"]
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.133668 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mpkt5"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.136100 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vdnbn"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.150377 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rp9n4\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.187114 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.241760 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.241851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pdb\" (UniqueName: \"kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.241976 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.343081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9"
Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.343394 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pdb\" (UniqueName: \"kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb\") pod
\"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.343637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.343656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.343752 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.344294 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.363061 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pdb\" (UniqueName: \"kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb\") pod \"redhat-operators-rjcl9\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " 
pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.408665 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcf7c949-7646-4b97-9ffa-bf019455ed07-metrics-certs\") pod \"network-metrics-daemon-snhft\" (UID: \"bcf7c949-7646-4b97-9ffa-bf019455ed07\") " pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.423187 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:21 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:21 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:21 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.423245 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.449243 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-snhft" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.469624 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.483574 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerID="9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf" exitCode=0 Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.483662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerDied","Data":"9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf"} Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.483687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerStarted","Data":"42d1f7a96b506460535038c73adfbdf016f348e88fbe546d09ebfeb0b60f3fa7"} Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.487621 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"] Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.513213 4730 generic.go:334] "Generic (PLEG): container finished" podID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerID="d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965" exitCode=0 Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.514339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerDied","Data":"d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965"} Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.575641 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:09:21 crc kubenswrapper[4730]: W0221 00:09:21.646051 4730 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8deb8736_a138_49f3_9550_060511014aaf.slice/crio-eaa01728f7e5579c4439ba10063f44dc1a603425599780b786e836262c2a4bc2 WatchSource:0}: Error finding container eaa01728f7e5579c4439ba10063f44dc1a603425599780b786e836262c2a4bc2: Status 404 returned error can't find the container with id eaa01728f7e5579c4439ba10063f44dc1a603425599780b786e836262c2a4bc2 Feb 21 00:09:21 crc kubenswrapper[4730]: I0221 00:09:21.937142 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.054932 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access\") pod \"04840b8c-dc72-455f-b41d-1914a1024ed8\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.055064 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir\") pod \"04840b8c-dc72-455f-b41d-1914a1024ed8\" (UID: \"04840b8c-dc72-455f-b41d-1914a1024ed8\") " Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.055396 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04840b8c-dc72-455f-b41d-1914a1024ed8" (UID: "04840b8c-dc72-455f-b41d-1914a1024ed8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.060690 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04840b8c-dc72-455f-b41d-1914a1024ed8" (UID: "04840b8c-dc72-455f-b41d-1914a1024ed8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.064390 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-snhft"] Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.157638 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04840b8c-dc72-455f-b41d-1914a1024ed8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.157682 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04840b8c-dc72-455f-b41d-1914a1024ed8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.187147 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"] Feb 21 00:09:22 crc kubenswrapper[4730]: W0221 00:09:22.278679 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8b8d7fe_e435_40cd_80fd_3595610bca8f.slice/crio-ad78a3a6206c063a00aca183f785c867f2b99057e6331ffe84d15a48a390b991 WatchSource:0}: Error finding container ad78a3a6206c063a00aca183f785c867f2b99057e6331ffe84d15a48a390b991: Status 404 returned error can't find the container with id ad78a3a6206c063a00aca183f785c867f2b99057e6331ffe84d15a48a390b991 Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.425111 4730 
patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:22 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:22 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:22 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.425188 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.541679 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" event={"ID":"8deb8736-a138-49f3-9550-060511014aaf","Type":"ContainerStarted","Data":"1f8395a31b7ef7093b091b6a8a820d9d7779db91b38d74f6a6aa673692e64801"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.541726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" event={"ID":"8deb8736-a138-49f3-9550-060511014aaf","Type":"ContainerStarted","Data":"eaa01728f7e5579c4439ba10063f44dc1a603425599780b786e836262c2a4bc2"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.541771 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.544183 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-snhft" event={"ID":"bcf7c949-7646-4b97-9ffa-bf019455ed07","Type":"ContainerStarted","Data":"bfbea79323a8be282e7a6dc0e2ce5da852c351c4bf52a32621a6b673e0e390db"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 
00:09:22.548093 4730 generic.go:334] "Generic (PLEG): container finished" podID="9b4eb008-ae9f-48aa-82b2-48ef108daddd" containerID="31c92a9fd0765968e0597073d525bef1224e54acc1731bbdfa42964137fdae68" exitCode=0 Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.548164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" event={"ID":"9b4eb008-ae9f-48aa-82b2-48ef108daddd","Type":"ContainerDied","Data":"31c92a9fd0765968e0597073d525bef1224e54acc1731bbdfa42964137fdae68"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.555252 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjcl9" event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerStarted","Data":"ad78a3a6206c063a00aca183f785c867f2b99057e6331ffe84d15a48a390b991"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.563385 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" podStartSLOduration=143.563362759 podStartE2EDuration="2m23.563362759s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:22.557889213 +0000 UTC m=+164.569456138" watchObservedRunningTime="2026-02-21 00:09:22.563362759 +0000 UTC m=+164.574929694" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.586631 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"04840b8c-dc72-455f-b41d-1914a1024ed8","Type":"ContainerDied","Data":"b68e12e5bd1f8c7daa6748b9bf15fcb0c942bf7c1e47a5382ccce25eb0327745"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.586676 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.586695 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68e12e5bd1f8c7daa6748b9bf15fcb0c942bf7c1e47a5382ccce25eb0327745" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.589034 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerID="530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47" exitCode=0 Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.589064 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerDied","Data":"530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.589098 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerStarted","Data":"a9266cf6ed47805fd3bba560a47807ba57a9419c9effc1fca2679a4453c7d9ea"} Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.702480 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.895521 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 00:09:22 crc kubenswrapper[4730]: E0221 00:09:22.895937 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04840b8c-dc72-455f-b41d-1914a1024ed8" containerName="pruner" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.896043 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="04840b8c-dc72-455f-b41d-1914a1024ed8" containerName="pruner" 
Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.896206 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="04840b8c-dc72-455f-b41d-1914a1024ed8" containerName="pruner" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.896637 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.903842 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.904025 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.926690 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.985113 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:22 crc kubenswrapper[4730]: I0221 00:09:22.985207 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.086106 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access\") 
pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.086324 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.086518 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.118704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.209575 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kv84q" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.218315 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.423842 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:23 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:23 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:23 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.423913 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.600748 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-snhft" event={"ID":"bcf7c949-7646-4b97-9ffa-bf019455ed07","Type":"ContainerStarted","Data":"e241ae8178b012e22137503aba932a3702ce5217f2f9891fe7efa77273e6bcbb"} Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.600802 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-snhft" event={"ID":"bcf7c949-7646-4b97-9ffa-bf019455ed07","Type":"ContainerStarted","Data":"65c5b0ddeb8e14ed039a5dba8f7752001ba796696f1934af20d3c61a0aaee854"} Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.614078 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerID="3740cb9240695745a6b30e26891d71fab50fb0d367ca3847b4f684afe7d7b656" exitCode=0 Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.615115 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjcl9" 
event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerDied","Data":"3740cb9240695745a6b30e26891d71fab50fb0d367ca3847b4f684afe7d7b656"} Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.616914 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-snhft" podStartSLOduration=144.61689641 podStartE2EDuration="2m24.61689641s" podCreationTimestamp="2026-02-21 00:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:23.61606508 +0000 UTC m=+165.627632015" watchObservedRunningTime="2026-02-21 00:09:23.61689641 +0000 UTC m=+165.628463345" Feb 21 00:09:23 crc kubenswrapper[4730]: I0221 00:09:23.698177 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 21 00:09:23 crc kubenswrapper[4730]: W0221 00:09:23.734010 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4edbf2d9_80de_4de4_8bf1_a4a7b9f0a043.slice/crio-4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c WatchSource:0}: Error finding container 4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c: Status 404 returned error can't find the container with id 4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.322454 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.322740 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.392227 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.422326 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:24 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:24 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:24 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.422377 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.508608 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume\") pod \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.508754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7v89\" (UniqueName: \"kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89\") pod \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.508778 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume\") pod \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\" (UID: \"9b4eb008-ae9f-48aa-82b2-48ef108daddd\") " Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.509614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b4eb008-ae9f-48aa-82b2-48ef108daddd" (UID: "9b4eb008-ae9f-48aa-82b2-48ef108daddd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.510123 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4eb008-ae9f-48aa-82b2-48ef108daddd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.531831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b4eb008-ae9f-48aa-82b2-48ef108daddd" (UID: "9b4eb008-ae9f-48aa-82b2-48ef108daddd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.531954 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89" (OuterVolumeSpecName: "kube-api-access-s7v89") pod "9b4eb008-ae9f-48aa-82b2-48ef108daddd" (UID: "9b4eb008-ae9f-48aa-82b2-48ef108daddd"). InnerVolumeSpecName "kube-api-access-s7v89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.612186 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b4eb008-ae9f-48aa-82b2-48ef108daddd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.612226 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7v89\" (UniqueName: \"kubernetes.io/projected/9b4eb008-ae9f-48aa-82b2-48ef108daddd-kube-api-access-s7v89\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.634074 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" event={"ID":"9b4eb008-ae9f-48aa-82b2-48ef108daddd","Type":"ContainerDied","Data":"8e37da7e43442f9bc1d3573596baf179b076343b41f0b1896cbecfe323dc324c"} Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.634112 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e37da7e43442f9bc1d3573596baf179b076343b41f0b1896cbecfe323dc324c" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.634084 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527200-zvsz9" Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.646033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043","Type":"ContainerStarted","Data":"703ba39caf753c1644327dc42b679eee895b939dc61bd49bef3aa987cb0e6157"} Feb 21 00:09:24 crc kubenswrapper[4730]: I0221 00:09:24.646078 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043","Type":"ContainerStarted","Data":"4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c"} Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.193274 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.197975 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6p499" Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.234694 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.234678286 podStartE2EDuration="3.234678286s" podCreationTimestamp="2026-02-21 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:24.667194856 +0000 UTC m=+166.678761781" watchObservedRunningTime="2026-02-21 00:09:25.234678286 +0000 UTC m=+167.246245211" Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.423549 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Feb 21 00:09:25 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:25 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:25 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.423908 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.699989 4730 generic.go:334] "Generic (PLEG): container finished" podID="4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" containerID="703ba39caf753c1644327dc42b679eee895b939dc61bd49bef3aa987cb0e6157" exitCode=0 Feb 21 00:09:25 crc kubenswrapper[4730]: I0221 00:09:25.700101 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043","Type":"ContainerDied","Data":"703ba39caf753c1644327dc42b679eee895b939dc61bd49bef3aa987cb0e6157"} Feb 21 00:09:26 crc kubenswrapper[4730]: I0221 00:09:26.422226 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:26 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:26 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:26 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:26 crc kubenswrapper[4730]: I0221 00:09:26.422279 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:27 crc kubenswrapper[4730]: I0221 
00:09:27.511748 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:27 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:27 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:27 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:27 crc kubenswrapper[4730]: I0221 00:09:27.512086 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:28 crc kubenswrapper[4730]: I0221 00:09:28.422143 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:28 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:28 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:28 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:28 crc kubenswrapper[4730]: I0221 00:09:28.422194 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:29 crc kubenswrapper[4730]: I0221 00:09:29.421263 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:29 crc kubenswrapper[4730]: [-]has-synced failed: reason 
withheld Feb 21 00:09:29 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:29 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:29 crc kubenswrapper[4730]: I0221 00:09:29.421319 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.000219 4730 patch_prober.go:28] interesting pod/console-f9d7485db-ttg62 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.000373 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ttg62" podUID="5b7d4c04-0a49-410b-a682-b58b6b97a987" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.288995 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.289355 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.289204 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-p7z7k container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.289778 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p7z7k" podUID="ba763110-213a-49ec-9385-e723b6a02fc8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.422208 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:30 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Feb 21 00:09:30 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:30 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:30 crc kubenswrapper[4730]: I0221 00:09:30.422274 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:31 crc kubenswrapper[4730]: I0221 00:09:31.421235 4730 patch_prober.go:28] interesting pod/router-default-5444994796-wxl66 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 00:09:31 crc kubenswrapper[4730]: [+]has-synced ok Feb 21 00:09:31 crc kubenswrapper[4730]: [+]process-running ok Feb 21 00:09:31 crc kubenswrapper[4730]: healthz check failed Feb 21 00:09:31 crc kubenswrapper[4730]: I0221 00:09:31.421349 4730 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-wxl66" podUID="dbd3805f-f503-444d-88c7-829832077376" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 00:09:32 crc kubenswrapper[4730]: I0221 00:09:32.422434 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:32 crc kubenswrapper[4730]: I0221 00:09:32.426862 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wxl66" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.754380 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.888526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir\") pod \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.888651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access\") pod \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\" (UID: \"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043\") " Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.889045 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" (UID: "4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.900092 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" (UID: "4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.909328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043","Type":"ContainerDied","Data":"4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c"} Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.909362 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4399599dafdedf2777961a27bd9e44243a48fdeaa60882c5fb0874493c629f6c" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.909365 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.989936 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:33 crc kubenswrapper[4730]: I0221 00:09:33.989988 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.772335 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"] Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.773021 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" containerID="cri-o://f13adca90e62bb54a97302742d6a1fc3c44e433de69f98259b120ff611d921c3" gracePeriod=30 Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.793529 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"] Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.794573 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" containerID="cri-o://dae1c75f4b7c25553a4f9f26ba7048f07206a2f113ee3ddd3f82b58a0dfc8f8f" gracePeriod=30 Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.919896 4730 generic.go:334] "Generic (PLEG): container finished" podID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" 
containerID="f13adca90e62bb54a97302742d6a1fc3c44e433de69f98259b120ff611d921c3" exitCode=0 Feb 21 00:09:35 crc kubenswrapper[4730]: I0221 00:09:35.919951 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" event={"ID":"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef","Type":"ContainerDied","Data":"f13adca90e62bb54a97302742d6a1fc3c44e433de69f98259b120ff611d921c3"} Feb 21 00:09:36 crc kubenswrapper[4730]: I0221 00:09:36.926247 4730 generic.go:334] "Generic (PLEG): container finished" podID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerID="dae1c75f4b7c25553a4f9f26ba7048f07206a2f113ee3ddd3f82b58a0dfc8f8f" exitCode=0 Feb 21 00:09:36 crc kubenswrapper[4730]: I0221 00:09:36.926292 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" event={"ID":"cf6bdbe0-9c57-4476-99f4-e837b5277f1a","Type":"ContainerDied","Data":"dae1c75f4b7c25553a4f9f26ba7048f07206a2f113ee3ddd3f82b58a0dfc8f8f"} Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.011566 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.018596 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ttg62" Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.171269 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gcll8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.171569 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" 
podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.225455 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b4h9p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.225526 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 21 00:09:40 crc kubenswrapper[4730]: I0221 00:09:40.293928 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-p7z7k" Feb 21 00:09:41 crc kubenswrapper[4730]: I0221 00:09:41.193203 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.733794 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762023 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:43 crc kubenswrapper[4730]: E0221 00:09:43.762243 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762256 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" Feb 21 00:09:43 crc kubenswrapper[4730]: E0221 00:09:43.762264 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" containerName="pruner" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762270 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" containerName="pruner" Feb 21 00:09:43 crc kubenswrapper[4730]: E0221 00:09:43.762281 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4eb008-ae9f-48aa-82b2-48ef108daddd" containerName="collect-profiles" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762287 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4eb008-ae9f-48aa-82b2-48ef108daddd" containerName="collect-profiles" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762421 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4eb008-ae9f-48aa-82b2-48ef108daddd" containerName="collect-profiles" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762436 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" containerName="controller-manager" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762444 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4edbf2d9-80de-4de4-8bf1-a4a7b9f0a043" containerName="pruner" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.762807 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.772918 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.919672 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles\") pod \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.919789 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert\") pod \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.919854 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config\") pod \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.919886 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hbl\" (UniqueName: \"kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl\") pod \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.919914 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca\") pod \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\" (UID: \"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef\") " Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920032 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920092 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ws8n\" (UniqueName: \"kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " 
pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920182 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920604 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" (UID: "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920618 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config" (OuterVolumeSpecName: "config") pod "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" (UID: "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.920634 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" (UID: "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.925239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" (UID: "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.934374 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl" (OuterVolumeSpecName: "kube-api-access-f5hbl") pod "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" (UID: "f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef"). InnerVolumeSpecName "kube-api-access-f5hbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.968812 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" event={"ID":"f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef","Type":"ContainerDied","Data":"bef012223b32af3200739cbf91c1c3ad94d7f1a658951f5d099eee6f85bfe144"} Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.968864 4730 scope.go:117] "RemoveContainer" containerID="f13adca90e62bb54a97302742d6a1fc3c44e433de69f98259b120ff611d921c3" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.968882 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b4h9p" Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.990289 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"] Feb 21 00:09:43 crc kubenswrapper[4730]: I0221 00:09:43.992670 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b4h9p"] Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.020832 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.020869 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.020902 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.020916 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca\") pod 
\"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021084 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ws8n\" (UniqueName: \"kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021328 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021351 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021364 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021376 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hbl\" (UniqueName: \"kubernetes.io/projected/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-kube-api-access-f5hbl\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021386 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.021823 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.022135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.022702 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.024604 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.038514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ws8n\" (UniqueName: \"kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n\") pod \"controller-manager-9bbd7f8f4-fsvbv\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc 
kubenswrapper[4730]: I0221 00:09:44.077996 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:44 crc kubenswrapper[4730]: I0221 00:09:44.699696 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef" path="/var/lib/kubelet/pods/f2555e7f-09bd-4c1c-af2b-4f8e2146c2ef/volumes" Feb 21 00:09:46 crc kubenswrapper[4730]: I0221 00:09:46.793456 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 00:09:47 crc kubenswrapper[4730]: I0221 00:09:47.995176 4730 generic.go:334] "Generic (PLEG): container finished" podID="bdfac2f0-2a56-43ab-b248-a9c523e41856" containerID="b7cc5726750645188b652aa4ac6fe0f39fba537efeecbab3e330cf511f369f19" exitCode=0 Feb 21 00:09:47 crc kubenswrapper[4730]: I0221 00:09:47.995301 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29527200-v2vdw" event={"ID":"bdfac2f0-2a56-43ab-b248-a9c523e41856","Type":"ContainerDied","Data":"b7cc5726750645188b652aa4ac6fe0f39fba537efeecbab3e330cf511f369f19"} Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.172285 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gcll8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.172815 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.183126 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.646172 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.683719 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.684391 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.684412 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.684548 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" containerName="route-controller-manager" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.684937 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.687625 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.687750 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb96x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Star
tupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kjwnr_openshift-marketplace(782ae357-da25-45f3-a274-048aa2ffdbf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.688207 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.689017 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kjwnr" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.716704 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.748909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhvf\" (UniqueName: \"kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf\") pod \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.749005 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert\") pod \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.749041 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca\") pod \"bdfac2f0-2a56-43ab-b248-a9c523e41856\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.749098 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphl8\" (UniqueName: \"kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8\") pod \"bdfac2f0-2a56-43ab-b248-a9c523e41856\" (UID: \"bdfac2f0-2a56-43ab-b248-a9c523e41856\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.749121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca\") pod \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.749239 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config\") pod \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\" (UID: \"cf6bdbe0-9c57-4476-99f4-e837b5277f1a\") " Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750159 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca" (OuterVolumeSpecName: "serviceca") pod "bdfac2f0-2a56-43ab-b248-a9c523e41856" (UID: "bdfac2f0-2a56-43ab-b248-a9c523e41856"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750170 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf6bdbe0-9c57-4476-99f4-e837b5277f1a" (UID: "cf6bdbe0-9c57-4476-99f4-e837b5277f1a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750251 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config" (OuterVolumeSpecName: "config") pod "cf6bdbe0-9c57-4476-99f4-e837b5277f1a" (UID: "cf6bdbe0-9c57-4476-99f4-e837b5277f1a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750401 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750436 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvvs\" (UniqueName: \"kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.750550 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.755231 4730 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.755413 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx6kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zr5rw_openshift-marketplace(406888dc-7d00-47e2-8c63-05e0106525e1): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 21 00:09:51 crc kubenswrapper[4730]: E0221 00:09:51.756580 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zr5rw" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.758628 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf" (OuterVolumeSpecName: "kube-api-access-qrhvf") pod "cf6bdbe0-9c57-4476-99f4-e837b5277f1a" (UID: "cf6bdbe0-9c57-4476-99f4-e837b5277f1a"). InnerVolumeSpecName "kube-api-access-qrhvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.759062 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf6bdbe0-9c57-4476-99f4-e837b5277f1a" (UID: "cf6bdbe0-9c57-4476-99f4-e837b5277f1a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.761460 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8" (OuterVolumeSpecName: "kube-api-access-dphl8") pod "bdfac2f0-2a56-43ab-b248-a9c523e41856" (UID: "bdfac2f0-2a56-43ab-b248-a9c523e41856"). InnerVolumeSpecName "kube-api-access-dphl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.851987 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvvs\" (UniqueName: \"kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854822 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854867 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854971 4730 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/bdfac2f0-2a56-43ab-b248-a9c523e41856-serviceca\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854984 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dphl8\" (UniqueName: \"kubernetes.io/projected/bdfac2f0-2a56-43ab-b248-a9c523e41856-kube-api-access-dphl8\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.854995 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.855003 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.855014 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhvf\" (UniqueName: \"kubernetes.io/projected/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-kube-api-access-qrhvf\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.855023 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6bdbe0-9c57-4476-99f4-e837b5277f1a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.856264 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.857925 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.864218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.874679 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvvs\" (UniqueName: \"kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs\") pod \"route-controller-manager-86dcc5664b-hhvw8\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:51 crc kubenswrapper[4730]: I0221 00:09:51.904142 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.020249 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.024098 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" event={"ID":"88630efb-6328-414d-8847-a014256256bf","Type":"ContainerStarted","Data":"7a482595f4deaf8a5675bc73285d207d90637797408e475f2b07a90ad022d414"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.046254 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerStarted","Data":"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.083377 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerStarted","Data":"74dee4756b7d0bb5a78b82fb273d749012b5b1aa0ab71b6e6b611634110c29b8"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.089520 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerStarted","Data":"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.096341 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" event={"ID":"cf6bdbe0-9c57-4476-99f4-e837b5277f1a","Type":"ContainerDied","Data":"e8824d5c11f50cb378e085073ba5f430d8c879aaab5f3546eb9ac380afc6e0c5"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.096416 4730 scope.go:117] "RemoveContainer" containerID="dae1c75f4b7c25553a4f9f26ba7048f07206a2f113ee3ddd3f82b58a0dfc8f8f" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 
00:09:52.096643 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.118770 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29527200-v2vdw" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.124585 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29527200-v2vdw" event={"ID":"bdfac2f0-2a56-43ab-b248-a9c523e41856","Type":"ContainerDied","Data":"fbf23865a6e0a8528831b9a994ee2a32ab6cdb4dfeb916a1f7c50a49f381eed6"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.124678 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf23865a6e0a8528831b9a994ee2a32ab6cdb4dfeb916a1f7c50a49f381eed6" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.139346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerStarted","Data":"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39"} Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.144723 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerStarted","Data":"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc"} Feb 21 00:09:52 crc kubenswrapper[4730]: E0221 00:09:52.145520 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kjwnr" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" Feb 21 00:09:52 crc 
kubenswrapper[4730]: E0221 00:09:52.145957 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zr5rw" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.251785 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"] Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.253749 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gcll8"] Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.331216 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.521114 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"] Feb 21 00:09:52 crc kubenswrapper[4730]: I0221 00:09:52.699723 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6bdbe0-9c57-4476-99f4-e837b5277f1a" path="/var/lib/kubelet/pods/cf6bdbe0-9c57-4476-99f4-e837b5277f1a/volumes" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.165456 4730 generic.go:334] "Generic (PLEG): container finished" podID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerID="daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.166023 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerDied","Data":"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc"} Feb 21 
00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.175206 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" event={"ID":"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b","Type":"ContainerStarted","Data":"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.175261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" event={"ID":"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b","Type":"ContainerStarted","Data":"9fb9f5a2fded6a52e1c5ad480ee49090fa24292f048e9ad940f1a1fe419b0149"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.175964 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.184028 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerID="c130c161581523e1ff124ccf286979b7bfc3020d2dff4f007ec0c5426297783e" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.184083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjcl9" event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerDied","Data":"c130c161581523e1ff124ccf286979b7bfc3020d2dff4f007ec0c5426297783e"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.187742 4730 generic.go:334] "Generic (PLEG): container finished" podID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerID="74dee4756b7d0bb5a78b82fb273d749012b5b1aa0ab71b6e6b611634110c29b8" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.187817 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" 
event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerDied","Data":"74dee4756b7d0bb5a78b82fb273d749012b5b1aa0ab71b6e6b611634110c29b8"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.190522 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerID="3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.190594 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerDied","Data":"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.192747 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" event={"ID":"88630efb-6328-414d-8847-a014256256bf","Type":"ContainerStarted","Data":"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.193011 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.194793 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerID="eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.195287 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerDied","Data":"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.200838 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerID="c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39" exitCode=0 Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.201408 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerDied","Data":"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39"} Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.203157 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.208315 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.227431 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" podStartSLOduration=18.227406864 podStartE2EDuration="18.227406864s" podCreationTimestamp="2026-02-21 00:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:53.215410348 +0000 UTC m=+195.226977283" watchObservedRunningTime="2026-02-21 00:09:53.227406864 +0000 UTC m=+195.238973799" Feb 21 00:09:53 crc kubenswrapper[4730]: I0221 00:09:53.350834 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" podStartSLOduration=18.350818733 podStartE2EDuration="18.350818733s" podCreationTimestamp="2026-02-21 00:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:53.347391834 +0000 UTC 
m=+195.358958769" watchObservedRunningTime="2026-02-21 00:09:53.350818733 +0000 UTC m=+195.362385668" Feb 21 00:09:54 crc kubenswrapper[4730]: I0221 00:09:54.323037 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:09:54 crc kubenswrapper[4730]: I0221 00:09:54.323116 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:09:55 crc kubenswrapper[4730]: I0221 00:09:55.769501 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:55 crc kubenswrapper[4730]: I0221 00:09:55.866706 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.221236 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerStarted","Data":"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.223621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerStarted","Data":"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.225838 4730 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rjcl9" event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerStarted","Data":"05dadea4f3707fbf61f5a5b11e649f3a57cef760fa1f907355e71f5538cf8698"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.227966 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerStarted","Data":"cfcfea6af6d72c0daa73ca19af91e978366c96fb7bfbd4f2676c8966fc96447e"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.230214 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerStarted","Data":"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.232734 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerStarted","Data":"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a"} Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.232813 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" podUID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" containerName="route-controller-manager" containerID="cri-o://9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5" gracePeriod=30 Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.233018 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" podUID="88630efb-6328-414d-8847-a014256256bf" containerName="controller-manager" containerID="cri-o://a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b" gracePeriod=30 Feb 21 00:09:56 crc 
kubenswrapper[4730]: I0221 00:09:56.242778 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmq2r" podStartSLOduration=3.22860237 podStartE2EDuration="37.242756963s" podCreationTimestamp="2026-02-21 00:09:19 +0000 UTC" firstStartedPulling="2026-02-21 00:09:21.556892885 +0000 UTC m=+163.568459820" lastFinishedPulling="2026-02-21 00:09:55.571047478 +0000 UTC m=+197.582614413" observedRunningTime="2026-02-21 00:09:56.241199271 +0000 UTC m=+198.252766206" watchObservedRunningTime="2026-02-21 00:09:56.242756963 +0000 UTC m=+198.254323898" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.266752 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ffq6b" podStartSLOduration=2.993071581 podStartE2EDuration="36.266732346s" podCreationTimestamp="2026-02-21 00:09:20 +0000 UTC" firstStartedPulling="2026-02-21 00:09:22.609221527 +0000 UTC m=+164.620788462" lastFinishedPulling="2026-02-21 00:09:55.882882292 +0000 UTC m=+197.894449227" observedRunningTime="2026-02-21 00:09:56.265255056 +0000 UTC m=+198.276822011" watchObservedRunningTime="2026-02-21 00:09:56.266732346 +0000 UTC m=+198.278299281" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.284852 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95vbk" podStartSLOduration=3.09639832 podStartE2EDuration="39.28482837s" podCreationTimestamp="2026-02-21 00:09:17 +0000 UTC" firstStartedPulling="2026-02-21 00:09:19.42631821 +0000 UTC m=+161.437885145" lastFinishedPulling="2026-02-21 00:09:55.61474823 +0000 UTC m=+197.626315195" observedRunningTime="2026-02-21 00:09:56.281645622 +0000 UTC m=+198.293212567" watchObservedRunningTime="2026-02-21 00:09:56.28482837 +0000 UTC m=+198.296395305" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.310549 4730 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-rjcl9" podStartSLOduration=3.1575824949999998 podStartE2EDuration="35.310532501s" podCreationTimestamp="2026-02-21 00:09:21 +0000 UTC" firstStartedPulling="2026-02-21 00:09:23.616669635 +0000 UTC m=+165.628236570" lastFinishedPulling="2026-02-21 00:09:55.769619641 +0000 UTC m=+197.781186576" observedRunningTime="2026-02-21 00:09:56.307417675 +0000 UTC m=+198.318984610" watchObservedRunningTime="2026-02-21 00:09:56.310532501 +0000 UTC m=+198.322099436" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.333217 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sgmxz" podStartSLOduration=3.6127502700000003 podStartE2EDuration="38.333201379s" podCreationTimestamp="2026-02-21 00:09:18 +0000 UTC" firstStartedPulling="2026-02-21 00:09:20.441170808 +0000 UTC m=+162.452737743" lastFinishedPulling="2026-02-21 00:09:55.161621917 +0000 UTC m=+197.173188852" observedRunningTime="2026-02-21 00:09:56.332170075 +0000 UTC m=+198.343737010" watchObservedRunningTime="2026-02-21 00:09:56.333201379 +0000 UTC m=+198.344768314" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.356334 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bcmp" podStartSLOduration=2.09987569 podStartE2EDuration="36.356314663s" podCreationTimestamp="2026-02-21 00:09:20 +0000 UTC" firstStartedPulling="2026-02-21 00:09:21.557333175 +0000 UTC m=+163.568900110" lastFinishedPulling="2026-02-21 00:09:55.813772148 +0000 UTC m=+197.825339083" observedRunningTime="2026-02-21 00:09:56.35269179 +0000 UTC m=+198.364258725" watchObservedRunningTime="2026-02-21 00:09:56.356314663 +0000 UTC m=+198.367881598" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.669441 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.722395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkvvs\" (UniqueName: \"kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs\") pod \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.722485 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config\") pod \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.722532 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca\") pod \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.722560 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert\") pod \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\" (UID: \"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.726316 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca" (OuterVolumeSpecName: "client-ca") pod "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" (UID: "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.727346 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config" (OuterVolumeSpecName: "config") pod "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" (UID: "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.731106 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs" (OuterVolumeSpecName: "kube-api-access-tkvvs") pod "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" (UID: "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b"). InnerVolumeSpecName "kube-api-access-tkvvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.744014 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" (UID: "85de07de-c9fa-44b9-bbbe-2d8a00a3f50b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.824078 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkvvs\" (UniqueName: \"kubernetes.io/projected/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-kube-api-access-tkvvs\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.824125 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.824141 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.824152 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.882313 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.924879 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca\") pod \"88630efb-6328-414d-8847-a014256256bf\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.924983 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles\") pod \"88630efb-6328-414d-8847-a014256256bf\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.925029 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert\") pod \"88630efb-6328-414d-8847-a014256256bf\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.925087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ws8n\" (UniqueName: \"kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n\") pod \"88630efb-6328-414d-8847-a014256256bf\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.925525 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "88630efb-6328-414d-8847-a014256256bf" (UID: "88630efb-6328-414d-8847-a014256256bf"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.925638 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88630efb-6328-414d-8847-a014256256bf" (UID: "88630efb-6328-414d-8847-a014256256bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.925888 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config" (OuterVolumeSpecName: "config") pod "88630efb-6328-414d-8847-a014256256bf" (UID: "88630efb-6328-414d-8847-a014256256bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.926104 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config\") pod \"88630efb-6328-414d-8847-a014256256bf\" (UID: \"88630efb-6328-414d-8847-a014256256bf\") " Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.926361 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.926385 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.926400 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/88630efb-6328-414d-8847-a014256256bf-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.930046 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88630efb-6328-414d-8847-a014256256bf" (UID: "88630efb-6328-414d-8847-a014256256bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:09:56 crc kubenswrapper[4730]: I0221 00:09:56.930099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n" (OuterVolumeSpecName: "kube-api-access-5ws8n") pod "88630efb-6328-414d-8847-a014256256bf" (UID: "88630efb-6328-414d-8847-a014256256bf"). InnerVolumeSpecName "kube-api-access-5ws8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.027487 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88630efb-6328-414d-8847-a014256256bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.027524 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ws8n\" (UniqueName: \"kubernetes.io/projected/88630efb-6328-414d-8847-a014256256bf-kube-api-access-5ws8n\") on node \"crc\" DevicePath \"\"" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196398 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:09:57 crc kubenswrapper[4730]: E0221 00:09:57.196666 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" containerName="route-controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: 
I0221 00:09:57.196690 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" containerName="route-controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: E0221 00:09:57.196702 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfac2f0-2a56-43ab-b248-a9c523e41856" containerName="image-pruner" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196711 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfac2f0-2a56-43ab-b248-a9c523e41856" containerName="image-pruner" Feb 21 00:09:57 crc kubenswrapper[4730]: E0221 00:09:57.196722 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88630efb-6328-414d-8847-a014256256bf" containerName="controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196730 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="88630efb-6328-414d-8847-a014256256bf" containerName="controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196842 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" containerName="route-controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196856 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="88630efb-6328-414d-8847-a014256256bf" containerName="controller-manager" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.196872 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfac2f0-2a56-43ab-b248-a9c523e41856" containerName="image-pruner" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.197309 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.202856 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.203681 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.222554 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.226211 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230714 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230746 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230770 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rnxw\" (UniqueName: 
\"kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230787 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sp5q\" (UniqueName: \"kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " 
pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230860 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.230879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.261779 4730 generic.go:334] "Generic (PLEG): container finished" podID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" containerID="9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5" exitCode=0 Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.261850 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" event={"ID":"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b","Type":"ContainerDied","Data":"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5"} Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.261878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" event={"ID":"85de07de-c9fa-44b9-bbbe-2d8a00a3f50b","Type":"ContainerDied","Data":"9fb9f5a2fded6a52e1c5ad480ee49090fa24292f048e9ad940f1a1fe419b0149"} Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.261896 4730 
scope.go:117] "RemoveContainer" containerID="9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.262044 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.265465 4730 generic.go:334] "Generic (PLEG): container finished" podID="88630efb-6328-414d-8847-a014256256bf" containerID="a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b" exitCode=0 Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.266134 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" event={"ID":"88630efb-6328-414d-8847-a014256256bf","Type":"ContainerDied","Data":"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b"} Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.266162 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" event={"ID":"88630efb-6328-414d-8847-a014256256bf","Type":"ContainerDied","Data":"7a482595f4deaf8a5675bc73285d207d90637797408e475f2b07a90ad022d414"} Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.266415 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.291620 4730 scope.go:117] "RemoveContainer" containerID="9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5" Feb 21 00:09:57 crc kubenswrapper[4730]: E0221 00:09:57.293195 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5\": container with ID starting with 9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5 not found: ID does not exist" containerID="9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.293233 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5"} err="failed to get container status \"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5\": rpc error: code = NotFound desc = could not find container \"9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5\": container with ID starting with 9c415a60ea42f38cdceb097769888fbb8c5ca63d6b74583c41fa350cf7cc47d5 not found: ID does not exist" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.293272 4730 scope.go:117] "RemoveContainer" containerID="a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.300596 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.304558 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86dcc5664b-hhvw8"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.314470 4730 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.317720 4730 scope.go:117] "RemoveContainer" containerID="a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b" Feb 21 00:09:57 crc kubenswrapper[4730]: E0221 00:09:57.318296 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b\": container with ID starting with a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b not found: ID does not exist" containerID="a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.318326 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b"} err="failed to get container status \"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b\": rpc error: code = NotFound desc = could not find container \"a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b\": container with ID starting with a9c0af85bb2c5b5ada9e9cb75dc3315a06b4da91c0eeab1d7fef9aed71984d6b not found: ID does not exist" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.320557 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9bbd7f8f4-fsvbv"] Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332546 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332598 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332628 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rnxw\" (UniqueName: \"kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332699 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " 
pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sp5q\" (UniqueName: \"kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332799 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.332875 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.334083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.334189 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.334209 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.334224 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.334848 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.341049 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.353727 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.353816 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rnxw\" (UniqueName: \"kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw\") pod \"controller-manager-cd5984f6b-v9n8n\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.357522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sp5q\" (UniqueName: \"kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q\") pod \"route-controller-manager-757bd57dc4-ldw9t\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.514813 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.522980 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:57 crc kubenswrapper[4730]: I0221 00:09:57.819343 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.033015 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95vbk" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.033405 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95vbk" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.076385 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:09:58 crc kubenswrapper[4730]: W0221 00:09:58.079716 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c30298_f580_4266_bc17_b6285bf37df6.slice/crio-3cf699d6385cf5736dd8913ed3729b5acbdadc91eece17042e350dcec2323bb4 WatchSource:0}: Error finding container 3cf699d6385cf5736dd8913ed3729b5acbdadc91eece17042e350dcec2323bb4: Status 404 returned error can't find the container with id 3cf699d6385cf5736dd8913ed3729b5acbdadc91eece17042e350dcec2323bb4 Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.282041 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" event={"ID":"48c30298-f580-4266-bc17-b6285bf37df6","Type":"ContainerStarted","Data":"425d2e2e7e8d5f054bc3380b929922e232f22a9391baffb27adcdc401b3af00d"} Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.282274 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" 
event={"ID":"48c30298-f580-4266-bc17-b6285bf37df6","Type":"ContainerStarted","Data":"3cf699d6385cf5736dd8913ed3729b5acbdadc91eece17042e350dcec2323bb4"} Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.284659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" event={"ID":"44b93af8-6feb-4c92-bf65-7a53bd3baac8","Type":"ContainerStarted","Data":"141945e352de594bf3f76de73cfdbe4158bd65f375e20d134853f936ff8947f4"} Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.284690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" event={"ID":"44b93af8-6feb-4c92-bf65-7a53bd3baac8","Type":"ContainerStarted","Data":"90e217f9bbe1df5a24568ca188f6f665bd98ec5864e7d94001f47f9acd82e7b8"} Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.284992 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.296639 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" podStartSLOduration=3.296624211 podStartE2EDuration="3.296624211s" podCreationTimestamp="2026-02-21 00:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:58.295566405 +0000 UTC m=+200.307133350" watchObservedRunningTime="2026-02-21 00:09:58.296624211 +0000 UTC m=+200.308191146" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.314438 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" podStartSLOduration=3.314419853 podStartE2EDuration="3.314419853s" podCreationTimestamp="2026-02-21 00:09:55 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:09:58.31284255 +0000 UTC m=+200.324409485" watchObservedRunningTime="2026-02-21 00:09:58.314419853 +0000 UTC m=+200.325986788" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.505624 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.505675 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.557261 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.700036 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85de07de-c9fa-44b9-bbbe-2d8a00a3f50b" path="/var/lib/kubelet/pods/85de07de-c9fa-44b9-bbbe-2d8a00a3f50b/volumes" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.700690 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88630efb-6328-414d-8847-a014256256bf" path="/var/lib/kubelet/pods/88630efb-6328-414d-8847-a014256256bf/volumes" Feb 21 00:09:58 crc kubenswrapper[4730]: I0221 00:09:58.710043 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:09:59 crc kubenswrapper[4730]: I0221 00:09:59.262362 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-95vbk" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="registry-server" probeResult="failure" output=< Feb 21 00:09:59 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 21 00:09:59 crc kubenswrapper[4730]: > Feb 21 00:09:59 crc 
kubenswrapper[4730]: I0221 00:09:59.299664 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:09:59 crc kubenswrapper[4730]: I0221 00:09:59.307973 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.068117 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.068481 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.113684 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.355497 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.461097 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.461140 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.493606 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.494546 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.497322 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.497534 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.502233 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.506025 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.575324 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.575402 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.677154 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.677237 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.677310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.698516 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:00 crc kubenswrapper[4730]: I0221 00:10:00.815664 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.071299 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ffq6b" Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.072130 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ffq6b" Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.286448 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 00:10:01 crc kubenswrapper[4730]: W0221 00:10:01.298598 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9bc5e8bf_866b_4924_8368_b03aab4436c2.slice/crio-f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0 WatchSource:0}: Error finding container f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0: Status 404 returned error can't find the container with id f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0 Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.308806 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bc5e8bf-866b-4924-8368-b03aab4436c2","Type":"ContainerStarted","Data":"f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0"} Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.356446 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.471085 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:01 crc kubenswrapper[4730]: I0221 00:10:01.471133 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:02 crc kubenswrapper[4730]: I0221 00:10:02.124850 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ffq6b" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="registry-server" probeResult="failure" output=< Feb 21 00:10:02 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 21 00:10:02 crc kubenswrapper[4730]: > Feb 21 00:10:02 crc kubenswrapper[4730]: I0221 00:10:02.298522 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"] Feb 21 00:10:02 crc kubenswrapper[4730]: I0221 00:10:02.313338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bc5e8bf-866b-4924-8368-b03aab4436c2","Type":"ContainerStarted","Data":"c9675192e383225abe6e6e38e3807a4bc02e9e1494d2b8116c182f1b326477ed"} Feb 21 00:10:02 crc kubenswrapper[4730]: I0221 00:10:02.516422 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rjcl9" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="registry-server" probeResult="failure" output=< Feb 21 00:10:02 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 21 00:10:02 crc kubenswrapper[4730]: > Feb 21 00:10:03 crc kubenswrapper[4730]: I0221 00:10:03.321203 4730 generic.go:334] "Generic (PLEG): container finished" podID="9bc5e8bf-866b-4924-8368-b03aab4436c2" containerID="c9675192e383225abe6e6e38e3807a4bc02e9e1494d2b8116c182f1b326477ed" exitCode=0 Feb 21 00:10:03 crc kubenswrapper[4730]: I0221 00:10:03.321351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bc5e8bf-866b-4924-8368-b03aab4436c2","Type":"ContainerDied","Data":"c9675192e383225abe6e6e38e3807a4bc02e9e1494d2b8116c182f1b326477ed"} Feb 21 00:10:03 crc kubenswrapper[4730]: 
I0221 00:10:03.321410 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bcmp" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="registry-server" containerID="cri-o://fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a" gracePeriod=2 Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.624245 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.731404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access\") pod \"9bc5e8bf-866b-4924-8368-b03aab4436c2\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.731461 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir\") pod \"9bc5e8bf-866b-4924-8368-b03aab4436c2\" (UID: \"9bc5e8bf-866b-4924-8368-b03aab4436c2\") " Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.731651 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bc5e8bf-866b-4924-8368-b03aab4436c2" (UID: "9bc5e8bf-866b-4924-8368-b03aab4436c2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.732051 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bc5e8bf-866b-4924-8368-b03aab4436c2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.746240 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bc5e8bf-866b-4924-8368-b03aab4436c2" (UID: "9bc5e8bf-866b-4924-8368-b03aab4436c2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:04 crc kubenswrapper[4730]: I0221 00:10:04.833302 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bc5e8bf-866b-4924-8368-b03aab4436c2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.257343 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.334295 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerID="fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a" exitCode=0 Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.334361 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerDied","Data":"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a"} Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.334371 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bcmp" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.334387 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bcmp" event={"ID":"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e","Type":"ContainerDied","Data":"42d1f7a96b506460535038c73adfbdf016f348e88fbe546d09ebfeb0b60f3fa7"} Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.334405 4730 scope.go:117] "RemoveContainer" containerID="fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.335977 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bc5e8bf-866b-4924-8368-b03aab4436c2","Type":"ContainerDied","Data":"f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0"} Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.335999 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.336014 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a1d6a7076dea1b4396d2751af1cf0018619f027384613fcfe8a99e5b0f19f0" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.340223 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content\") pod \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.340281 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities\") pod \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.340329 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdgdq\" (UniqueName: \"kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq\") pod \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\" (UID: \"b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e\") " Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.341141 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities" (OuterVolumeSpecName: "utilities") pod "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" (UID: "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.346413 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq" (OuterVolumeSpecName: "kube-api-access-wdgdq") pod "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" (UID: "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e"). InnerVolumeSpecName "kube-api-access-wdgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.360031 4730 scope.go:117] "RemoveContainer" containerID="3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.373541 4730 scope.go:117] "RemoveContainer" containerID="9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.375726 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" (UID: "b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.390767 4730 scope.go:117] "RemoveContainer" containerID="fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a" Feb 21 00:10:05 crc kubenswrapper[4730]: E0221 00:10:05.391194 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a\": container with ID starting with fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a not found: ID does not exist" containerID="fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.391295 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a"} err="failed to get container status \"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a\": rpc error: code = NotFound desc = could not find container \"fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a\": container with ID starting with fe61596433f0507f22c2746cfbec29fedbd7d0c5a916cded44b8ec2941ab984a not found: ID does not exist" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.391393 4730 scope.go:117] "RemoveContainer" containerID="3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb" Feb 21 00:10:05 crc kubenswrapper[4730]: E0221 00:10:05.391779 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb\": container with ID starting with 3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb not found: ID does not exist" containerID="3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.391811 
4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb"} err="failed to get container status \"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb\": rpc error: code = NotFound desc = could not find container \"3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb\": container with ID starting with 3c16341614904325f3cfc93b5c01a6d39a1290d554604fff5fa20a9230fa7cdb not found: ID does not exist" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.391833 4730 scope.go:117] "RemoveContainer" containerID="9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf" Feb 21 00:10:05 crc kubenswrapper[4730]: E0221 00:10:05.392247 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf\": container with ID starting with 9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf not found: ID does not exist" containerID="9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.392266 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf"} err="failed to get container status \"9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf\": rpc error: code = NotFound desc = could not find container \"9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf\": container with ID starting with 9085fb2d924ff2c45d6f207422fa7984e98414e67a76762e77ef8cd787c127bf not found: ID does not exist" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.441456 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.441492 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.441502 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdgdq\" (UniqueName: \"kubernetes.io/projected/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e-kube-api-access-wdgdq\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.664633 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"] Feb 21 00:10:05 crc kubenswrapper[4730]: I0221 00:10:05.669258 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bcmp"] Feb 21 00:10:06 crc kubenswrapper[4730]: I0221 00:10:06.342158 4730 generic.go:334] "Generic (PLEG): container finished" podID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerID="d690f17dd5abfcac18ade2e828de6cfc98fd6d67a6b6f6a0660641dcf4f1e188" exitCode=0 Feb 21 00:10:06 crc kubenswrapper[4730]: I0221 00:10:06.342200 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerDied","Data":"d690f17dd5abfcac18ade2e828de6cfc98fd6d67a6b6f6a0660641dcf4f1e188"} Feb 21 00:10:06 crc kubenswrapper[4730]: I0221 00:10:06.699847 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" path="/var/lib/kubelet/pods/b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e/volumes" Feb 21 00:10:07 crc kubenswrapper[4730]: I0221 00:10:07.350327 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" 
event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerStarted","Data":"bc723a40b15b84d282f133293f96edfa2ea2e4dd560b131ec86b5a55b3235f56"} Feb 21 00:10:07 crc kubenswrapper[4730]: I0221 00:10:07.364201 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjwnr" podStartSLOduration=3.85279991 podStartE2EDuration="50.36418431s" podCreationTimestamp="2026-02-21 00:09:17 +0000 UTC" firstStartedPulling="2026-02-21 00:09:20.446200874 +0000 UTC m=+162.457767809" lastFinishedPulling="2026-02-21 00:10:06.957585274 +0000 UTC m=+208.969152209" observedRunningTime="2026-02-21 00:10:07.363053322 +0000 UTC m=+209.374620257" watchObservedRunningTime="2026-02-21 00:10:07.36418431 +0000 UTC m=+209.375751245" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.077216 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95vbk" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.156558 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95vbk" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.292189 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 00:10:08 crc kubenswrapper[4730]: E0221 00:10:08.292607 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="extract-utilities" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.292671 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="extract-utilities" Feb 21 00:10:08 crc kubenswrapper[4730]: E0221 00:10:08.292732 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="registry-server" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.292793 4730 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="registry-server" Feb 21 00:10:08 crc kubenswrapper[4730]: E0221 00:10:08.292861 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="extract-content" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.292922 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="extract-content" Feb 21 00:10:08 crc kubenswrapper[4730]: E0221 00:10:08.293004 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc5e8bf-866b-4924-8368-b03aab4436c2" containerName="pruner" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.293060 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc5e8bf-866b-4924-8368-b03aab4436c2" containerName="pruner" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.293199 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc5e8bf-866b-4924-8368-b03aab4436c2" containerName="pruner" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.293271 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f9e70a-3fc3-4d58-ab8f-d9fcb86f246e" containerName="registry-server" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.293857 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.297553 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.297595 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.300057 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.300090 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.300783 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.376921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.376984 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.377002 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.478407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.478468 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.478543 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.478600 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.478558 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.498661 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access\") pod \"installer-9-crc\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.549001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:10:08 crc kubenswrapper[4730]: I0221 00:10:08.630750 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:09 crc kubenswrapper[4730]: I0221 00:10:09.109670 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 00:10:09 crc kubenswrapper[4730]: W0221 00:10:09.115112 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode062c181_c9d3_4cee_8284_27620adeafb0.slice/crio-e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad WatchSource:0}: Error finding container e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad: Status 404 returned error can't find the container with id e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad Feb 21 00:10:09 crc kubenswrapper[4730]: I0221 00:10:09.341314 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kjwnr" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="registry-server" probeResult="failure" output=< Feb 21 00:10:09 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Feb 21 00:10:09 crc kubenswrapper[4730]: > Feb 21 00:10:09 crc kubenswrapper[4730]: I0221 00:10:09.359630 4730 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e062c181-c9d3-4cee-8284-27620adeafb0","Type":"ContainerStarted","Data":"e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad"} Feb 21 00:10:09 crc kubenswrapper[4730]: I0221 00:10:09.362576 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerStarted","Data":"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44"} Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.374469 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e062c181-c9d3-4cee-8284-27620adeafb0","Type":"ContainerStarted","Data":"764144ab43620b20a91df266d1d0ca8456646cdd9dfea0751a905ad4a574c9f7"} Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.379041 4730 generic.go:334] "Generic (PLEG): container finished" podID="406888dc-7d00-47e2-8c63-05e0106525e1" containerID="1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44" exitCode=0 Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.379109 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerDied","Data":"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44"} Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.379150 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerStarted","Data":"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2"} Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.456527 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.456500917 podStartE2EDuration="2.456500917s" 
podCreationTimestamp="2026-02-21 00:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:10.431491529 +0000 UTC m=+212.443058464" watchObservedRunningTime="2026-02-21 00:10:10.456500917 +0000 UTC m=+212.468067852" Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.457186 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zr5rw" podStartSLOduration=3.139284683 podStartE2EDuration="53.457181461s" podCreationTimestamp="2026-02-21 00:09:17 +0000 UTC" firstStartedPulling="2026-02-21 00:09:19.428308356 +0000 UTC m=+161.439875301" lastFinishedPulling="2026-02-21 00:10:09.746205114 +0000 UTC m=+211.757772079" observedRunningTime="2026-02-21 00:10:10.455750842 +0000 UTC m=+212.467317777" watchObservedRunningTime="2026-02-21 00:10:10.457181461 +0000 UTC m=+212.468748396" Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.900147 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgmxz"] Feb 21 00:10:10 crc kubenswrapper[4730]: I0221 00:10:10.900918 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sgmxz" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="registry-server" containerID="cri-o://cfcfea6af6d72c0daa73ca19af91e978366c96fb7bfbd4f2676c8966fc96447e" gracePeriod=2 Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.118102 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ffq6b" Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.155790 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ffq6b" Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.390179 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerID="cfcfea6af6d72c0daa73ca19af91e978366c96fb7bfbd4f2676c8966fc96447e" exitCode=0 Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.390289 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerDied","Data":"cfcfea6af6d72c0daa73ca19af91e978366c96fb7bfbd4f2676c8966fc96447e"} Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.531844 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.586892 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:11 crc kubenswrapper[4730]: I0221 00:10:11.967833 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.031359 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content\") pod \"6db4a3c0-933e-4bac-b03c-46e8631ab467\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.031400 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities\") pod \"6db4a3c0-933e-4bac-b03c-46e8631ab467\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.031519 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gw9\" (UniqueName: 
\"kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9\") pod \"6db4a3c0-933e-4bac-b03c-46e8631ab467\" (UID: \"6db4a3c0-933e-4bac-b03c-46e8631ab467\") " Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.032556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities" (OuterVolumeSpecName: "utilities") pod "6db4a3c0-933e-4bac-b03c-46e8631ab467" (UID: "6db4a3c0-933e-4bac-b03c-46e8631ab467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.039248 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9" (OuterVolumeSpecName: "kube-api-access-g8gw9") pod "6db4a3c0-933e-4bac-b03c-46e8631ab467" (UID: "6db4a3c0-933e-4bac-b03c-46e8631ab467"). InnerVolumeSpecName "kube-api-access-g8gw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.083745 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db4a3c0-933e-4bac-b03c-46e8631ab467" (UID: "6db4a3c0-933e-4bac-b03c-46e8631ab467"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.134432 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gw9\" (UniqueName: \"kubernetes.io/projected/6db4a3c0-933e-4bac-b03c-46e8631ab467-kube-api-access-g8gw9\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.134476 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.134495 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db4a3c0-933e-4bac-b03c-46e8631ab467-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.400110 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgmxz" event={"ID":"6db4a3c0-933e-4bac-b03c-46e8631ab467","Type":"ContainerDied","Data":"f5fbca68fc553b9e1650d63e20346a01836e5ce7189bb1d205001aca5bc03c79"} Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.400166 4730 scope.go:117] "RemoveContainer" containerID="cfcfea6af6d72c0daa73ca19af91e978366c96fb7bfbd4f2676c8966fc96447e" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.400183 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgmxz" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.428065 4730 scope.go:117] "RemoveContainer" containerID="74dee4756b7d0bb5a78b82fb273d749012b5b1aa0ab71b6e6b611634110c29b8" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.443533 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgmxz"] Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.444679 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sgmxz"] Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.462780 4730 scope.go:117] "RemoveContainer" containerID="78a09d4f840e3a929f56b8121ee0145bd33214c297e8d2cc9016296611628f3f" Feb 21 00:10:12 crc kubenswrapper[4730]: I0221 00:10:12.705546 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" path="/var/lib/kubelet/pods/6db4a3c0-933e-4bac-b03c-46e8631ab467/volumes" Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.106562 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"] Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.107367 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjcl9" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="registry-server" containerID="cri-o://05dadea4f3707fbf61f5a5b11e649f3a57cef760fa1f907355e71f5538cf8698" gracePeriod=2 Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.429273 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerID="05dadea4f3707fbf61f5a5b11e649f3a57cef760fa1f907355e71f5538cf8698" exitCode=0 Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.429324 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjcl9" 
event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerDied","Data":"05dadea4f3707fbf61f5a5b11e649f3a57cef760fa1f907355e71f5538cf8698"} Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.811575 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.811889 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" podUID="48c30298-f580-4266-bc17-b6285bf37df6" containerName="controller-manager" containerID="cri-o://425d2e2e7e8d5f054bc3380b929922e232f22a9391baffb27adcdc401b3af00d" gracePeriod=30 Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.829620 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:10:15 crc kubenswrapper[4730]: I0221 00:10:15.829886 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" podUID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" containerName="route-controller-manager" containerID="cri-o://141945e352de594bf3f76de73cfdbe4158bd65f375e20d134853f936ff8947f4" gracePeriod=30 Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.199699 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.310396 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities\") pod \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.310548 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pdb\" (UniqueName: \"kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb\") pod \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.310615 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content\") pod \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\" (UID: \"e8b8d7fe-e435-40cd-80fd-3595610bca8f\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.312564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities" (OuterVolumeSpecName: "utilities") pod "e8b8d7fe-e435-40cd-80fd-3595610bca8f" (UID: "e8b8d7fe-e435-40cd-80fd-3595610bca8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.319163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb" (OuterVolumeSpecName: "kube-api-access-t9pdb") pod "e8b8d7fe-e435-40cd-80fd-3595610bca8f" (UID: "e8b8d7fe-e435-40cd-80fd-3595610bca8f"). InnerVolumeSpecName "kube-api-access-t9pdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.411963 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pdb\" (UniqueName: \"kubernetes.io/projected/e8b8d7fe-e435-40cd-80fd-3595610bca8f-kube-api-access-t9pdb\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.412003 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.436792 4730 generic.go:334] "Generic (PLEG): container finished" podID="48c30298-f580-4266-bc17-b6285bf37df6" containerID="425d2e2e7e8d5f054bc3380b929922e232f22a9391baffb27adcdc401b3af00d" exitCode=0 Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.436876 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" event={"ID":"48c30298-f580-4266-bc17-b6285bf37df6","Type":"ContainerDied","Data":"425d2e2e7e8d5f054bc3380b929922e232f22a9391baffb27adcdc401b3af00d"} Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.438645 4730 generic.go:334] "Generic (PLEG): container finished" podID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" containerID="141945e352de594bf3f76de73cfdbe4158bd65f375e20d134853f936ff8947f4" exitCode=0 Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.438725 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" event={"ID":"44b93af8-6feb-4c92-bf65-7a53bd3baac8","Type":"ContainerDied","Data":"141945e352de594bf3f76de73cfdbe4158bd65f375e20d134853f936ff8947f4"} Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.438754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" 
event={"ID":"44b93af8-6feb-4c92-bf65-7a53bd3baac8","Type":"ContainerDied","Data":"90e217f9bbe1df5a24568ca188f6f665bd98ec5864e7d94001f47f9acd82e7b8"} Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.438768 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e217f9bbe1df5a24568ca188f6f665bd98ec5864e7d94001f47f9acd82e7b8" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.441035 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjcl9" event={"ID":"e8b8d7fe-e435-40cd-80fd-3595610bca8f","Type":"ContainerDied","Data":"ad78a3a6206c063a00aca183f785c867f2b99057e6331ffe84d15a48a390b991"} Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.441079 4730 scope.go:117] "RemoveContainer" containerID="05dadea4f3707fbf61f5a5b11e649f3a57cef760fa1f907355e71f5538cf8698" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.441171 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjcl9" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.446607 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.477025 4730 scope.go:117] "RemoveContainer" containerID="c130c161581523e1ff124ccf286979b7bfc3020d2dff4f007ec0c5426297783e" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.480290 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8b8d7fe-e435-40cd-80fd-3595610bca8f" (UID: "e8b8d7fe-e435-40cd-80fd-3595610bca8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.512872 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sp5q\" (UniqueName: \"kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q\") pod \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.512959 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config\") pod \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.513024 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca\") pod \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.513050 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert\") pod \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\" (UID: \"44b93af8-6feb-4c92-bf65-7a53bd3baac8\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.513250 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b8d7fe-e435-40cd-80fd-3595610bca8f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.516220 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"44b93af8-6feb-4c92-bf65-7a53bd3baac8" (UID: "44b93af8-6feb-4c92-bf65-7a53bd3baac8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.516303 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config" (OuterVolumeSpecName: "config") pod "44b93af8-6feb-4c92-bf65-7a53bd3baac8" (UID: "44b93af8-6feb-4c92-bf65-7a53bd3baac8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.516361 4730 scope.go:117] "RemoveContainer" containerID="3740cb9240695745a6b30e26891d71fab50fb0d367ca3847b4f684afe7d7b656" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.521721 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44b93af8-6feb-4c92-bf65-7a53bd3baac8" (UID: "44b93af8-6feb-4c92-bf65-7a53bd3baac8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.521759 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q" (OuterVolumeSpecName: "kube-api-access-7sp5q") pod "44b93af8-6feb-4c92-bf65-7a53bd3baac8" (UID: "44b93af8-6feb-4c92-bf65-7a53bd3baac8"). InnerVolumeSpecName "kube-api-access-7sp5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.538060 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614374 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rnxw\" (UniqueName: \"kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw\") pod \"48c30298-f580-4266-bc17-b6285bf37df6\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca\") pod \"48c30298-f580-4266-bc17-b6285bf37df6\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614581 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles\") pod \"48c30298-f580-4266-bc17-b6285bf37df6\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config\") pod \"48c30298-f580-4266-bc17-b6285bf37df6\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614635 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert\") pod \"48c30298-f580-4266-bc17-b6285bf37df6\" (UID: \"48c30298-f580-4266-bc17-b6285bf37df6\") " Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614869 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614891 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44b93af8-6feb-4c92-bf65-7a53bd3baac8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614903 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b93af8-6feb-4c92-bf65-7a53bd3baac8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.614914 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sp5q\" (UniqueName: \"kubernetes.io/projected/44b93af8-6feb-4c92-bf65-7a53bd3baac8-kube-api-access-7sp5q\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.615173 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "48c30298-f580-4266-bc17-b6285bf37df6" (UID: "48c30298-f580-4266-bc17-b6285bf37df6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.615207 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca" (OuterVolumeSpecName: "client-ca") pod "48c30298-f580-4266-bc17-b6285bf37df6" (UID: "48c30298-f580-4266-bc17-b6285bf37df6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.615755 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config" (OuterVolumeSpecName: "config") pod "48c30298-f580-4266-bc17-b6285bf37df6" (UID: "48c30298-f580-4266-bc17-b6285bf37df6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.617714 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw" (OuterVolumeSpecName: "kube-api-access-6rnxw") pod "48c30298-f580-4266-bc17-b6285bf37df6" (UID: "48c30298-f580-4266-bc17-b6285bf37df6"). InnerVolumeSpecName "kube-api-access-6rnxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.617732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48c30298-f580-4266-bc17-b6285bf37df6" (UID: "48c30298-f580-4266-bc17-b6285bf37df6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.715729 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.715789 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.715817 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48c30298-f580-4266-bc17-b6285bf37df6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.715843 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rnxw\" (UniqueName: \"kubernetes.io/projected/48c30298-f580-4266-bc17-b6285bf37df6-kube-api-access-6rnxw\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.715870 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48c30298-f580-4266-bc17-b6285bf37df6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.761110 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"] Feb 21 00:10:16 crc kubenswrapper[4730]: I0221 00:10:16.763249 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjcl9"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.212533 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"] Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.212893 4730 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="registry-server" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.212913 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="registry-server" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.212934 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="extract-utilities" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.212980 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="extract-utilities" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213032 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="extract-content" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213047 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="extract-content" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213065 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c30298-f580-4266-bc17-b6285bf37df6" containerName="controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213077 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c30298-f580-4266-bc17-b6285bf37df6" containerName="controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213094 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="registry-server" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="registry-server" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213136 4730 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="extract-content" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213180 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="extract-content" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213201 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="extract-utilities" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213214 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="extract-utilities" Feb 21 00:10:17 crc kubenswrapper[4730]: E0221 00:10:17.213233 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" containerName="route-controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213245 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" containerName="route-controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213441 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" containerName="registry-server" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213469 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c30298-f580-4266-bc17-b6285bf37df6" containerName="controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213488 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" containerName="route-controller-manager" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.213502 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db4a3c0-933e-4bac-b03c-46e8631ab467" containerName="registry-server" Feb 21 00:10:17 crc 
kubenswrapper[4730]: I0221 00:10:17.214161 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.218118 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.219245 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.237833 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.250230 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.321989 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322076 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322114 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322176 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmp5\" (UniqueName: \"kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322371 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles\") pod 
\"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322403 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5x7g\" (UniqueName: \"kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.322438 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423091 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5x7g\" (UniqueName: \"kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423135 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423176 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423196 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423217 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423258 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqmp5\" (UniqueName: \"kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423282 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.423306 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.424429 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.426075 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.426859 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config\") pod 
\"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.426896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.428102 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.430017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.431558 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.448642 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5x7g\" (UniqueName: 
\"kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g\") pod \"controller-manager-5fc67c75cf-mzrt6\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.449081 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.449109 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.449116 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd5984f6b-v9n8n" event={"ID":"48c30298-f580-4266-bc17-b6285bf37df6","Type":"ContainerDied","Data":"3cf699d6385cf5736dd8913ed3729b5acbdadc91eece17042e350dcec2323bb4"} Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.449170 4730 scope.go:117] "RemoveContainer" containerID="425d2e2e7e8d5f054bc3380b929922e232f22a9391baffb27adcdc401b3af00d" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.452215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqmp5\" (UniqueName: \"kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5\") pod \"route-controller-manager-6c6f598978-m97q2\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.501227 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.506995 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-cd5984f6b-v9n8n"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.514588 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.521661 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-757bd57dc4-ldw9t"] Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.550073 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" containerID="cri-o://98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845" gracePeriod=15 Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.571745 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.579679 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.826581 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.826923 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.864634 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:10:17 crc kubenswrapper[4730]: I0221 00:10:17.941011 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.006443 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"] Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.030679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031000 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031116 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031971 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032131 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032299 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032502 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwbnr\" (UniqueName: \"kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.032617 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle\") pod \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\" (UID: \"75d6599f-a7ce-4d05-a452-ebcb1a1fea65\") " Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.031718 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" 
(UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.033077 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.033572 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.033741 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.033762 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.037407 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.038008 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.038293 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.038612 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.038831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.039021 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.039039 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr" (OuterVolumeSpecName: "kube-api-access-zwbnr") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "kube-api-access-zwbnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.039141 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.039676 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "75d6599f-a7ce-4d05-a452-ebcb1a1fea65" (UID: "75d6599f-a7ce-4d05-a452-ebcb1a1fea65"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.051593 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"] Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134489 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134747 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134762 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134776 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134789 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134822 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134838 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134853 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134867 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134879 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwbnr\" (UniqueName: \"kubernetes.io/projected/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-kube-api-access-zwbnr\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134892 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134906 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134920 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.134932 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75d6599f-a7ce-4d05-a452-ebcb1a1fea65-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.368895 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.407576 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.454413 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" event={"ID":"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b","Type":"ContainerStarted","Data":"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.454455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" 
event={"ID":"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b","Type":"ContainerStarted","Data":"6acd56af4ca0fe88942cac1fee31192cd087d8edc0916b60790909af4e8e6962"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.454667 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.456278 4730 generic.go:334] "Generic (PLEG): container finished" podID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerID="98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845" exitCode=0 Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.456373 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" event={"ID":"75d6599f-a7ce-4d05-a452-ebcb1a1fea65","Type":"ContainerDied","Data":"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.456782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" event={"ID":"75d6599f-a7ce-4d05-a452-ebcb1a1fea65","Type":"ContainerDied","Data":"4b7b5832c044db8e48f13bf3d9de9c544e6e9294b882529c8c1e43e45fe42e3a"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.456810 4730 scope.go:117] "RemoveContainer" containerID="98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.456971 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c2cjf" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.471527 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" event={"ID":"b2cd1a24-92ab-4849-b4f5-a836cda6da06","Type":"ContainerStarted","Data":"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.471572 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" event={"ID":"b2cd1a24-92ab-4849-b4f5-a836cda6da06","Type":"ContainerStarted","Data":"e6ee068e657421824f9c423af003732fc7cbde9e28ccf715e56e2f7264aca989"} Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.480540 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" podStartSLOduration=3.480527066 podStartE2EDuration="3.480527066s" podCreationTimestamp="2026-02-21 00:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:18.47974171 +0000 UTC m=+220.491308645" watchObservedRunningTime="2026-02-21 00:10:18.480527066 +0000 UTC m=+220.492094001" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.496965 4730 scope.go:117] "RemoveContainer" containerID="98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845" Feb 21 00:10:18 crc kubenswrapper[4730]: E0221 00:10:18.497424 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845\": container with ID starting with 98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845 not found: ID does not exist" 
containerID="98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.497455 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845"} err="failed to get container status \"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845\": rpc error: code = NotFound desc = could not find container \"98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845\": container with ID starting with 98579cef4cd8f4a928a87a2f3c8420c484ce4f500c58e48a744a1a0f1129e845 not found: ID does not exist" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.500422 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"] Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.503563 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c2cjf"] Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.510707 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" podStartSLOduration=3.510683319 podStartE2EDuration="3.510683319s" podCreationTimestamp="2026-02-21 00:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:18.508444573 +0000 UTC m=+220.520011518" watchObservedRunningTime="2026-02-21 00:10:18.510683319 +0000 UTC m=+220.522250254" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.518223 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.627346 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.700712 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b93af8-6feb-4c92-bf65-7a53bd3baac8" path="/var/lib/kubelet/pods/44b93af8-6feb-4c92-bf65-7a53bd3baac8/volumes" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.701577 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c30298-f580-4266-bc17-b6285bf37df6" path="/var/lib/kubelet/pods/48c30298-f580-4266-bc17-b6285bf37df6/volumes" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.702097 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" path="/var/lib/kubelet/pods/75d6599f-a7ce-4d05-a452-ebcb1a1fea65/volumes" Feb 21 00:10:18 crc kubenswrapper[4730]: I0221 00:10:18.703214 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b8d7fe-e435-40cd-80fd-3595610bca8f" path="/var/lib/kubelet/pods/e8b8d7fe-e435-40cd-80fd-3595610bca8f/volumes" Feb 21 00:10:19 crc kubenswrapper[4730]: I0221 00:10:19.482387 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:19 crc kubenswrapper[4730]: I0221 00:10:19.486881 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.099421 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"] Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.099661 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjwnr" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="registry-server" 
containerID="cri-o://bc723a40b15b84d282f133293f96edfa2ea2e4dd560b131ec86b5a55b3235f56" gracePeriod=2 Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.488438 4730 generic.go:334] "Generic (PLEG): container finished" podID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerID="bc723a40b15b84d282f133293f96edfa2ea2e4dd560b131ec86b5a55b3235f56" exitCode=0 Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.488641 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerDied","Data":"bc723a40b15b84d282f133293f96edfa2ea2e4dd560b131ec86b5a55b3235f56"} Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.522882 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.566338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities\") pod \"782ae357-da25-45f3-a274-048aa2ffdbf0\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.566441 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content\") pod \"782ae357-da25-45f3-a274-048aa2ffdbf0\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.566464 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb96x\" (UniqueName: \"kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x\") pod \"782ae357-da25-45f3-a274-048aa2ffdbf0\" (UID: \"782ae357-da25-45f3-a274-048aa2ffdbf0\") " Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 
00:10:20.567047 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities" (OuterVolumeSpecName: "utilities") pod "782ae357-da25-45f3-a274-048aa2ffdbf0" (UID: "782ae357-da25-45f3-a274-048aa2ffdbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.576430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x" (OuterVolumeSpecName: "kube-api-access-nb96x") pod "782ae357-da25-45f3-a274-048aa2ffdbf0" (UID: "782ae357-da25-45f3-a274-048aa2ffdbf0"). InnerVolumeSpecName "kube-api-access-nb96x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.617192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782ae357-da25-45f3-a274-048aa2ffdbf0" (UID: "782ae357-da25-45f3-a274-048aa2ffdbf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.667756 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.667793 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb96x\" (UniqueName: \"kubernetes.io/projected/782ae357-da25-45f3-a274-048aa2ffdbf0-kube-api-access-nb96x\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:20 crc kubenswrapper[4730]: I0221 00:10:20.667805 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782ae357-da25-45f3-a274-048aa2ffdbf0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.495573 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjwnr" event={"ID":"782ae357-da25-45f3-a274-048aa2ffdbf0","Type":"ContainerDied","Data":"a78def995ce6ff2e3a981e0cbd55bf071065a1788295e4e72cdff8d0b498479b"} Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.495624 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjwnr" Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.495664 4730 scope.go:117] "RemoveContainer" containerID="bc723a40b15b84d282f133293f96edfa2ea2e4dd560b131ec86b5a55b3235f56" Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.515354 4730 scope.go:117] "RemoveContainer" containerID="d690f17dd5abfcac18ade2e828de6cfc98fd6d67a6b6f6a0660641dcf4f1e188" Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.517904 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"] Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.522002 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjwnr"] Feb 21 00:10:21 crc kubenswrapper[4730]: I0221 00:10:21.534113 4730 scope.go:117] "RemoveContainer" containerID="66ca9990ec8b62a73f29ca30a78f9f8790d3fe11d18d6d493374e5172663c54c" Feb 21 00:10:22 crc kubenswrapper[4730]: I0221 00:10:22.701512 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" path="/var/lib/kubelet/pods/782ae357-da25-45f3-a274-048aa2ffdbf0/volumes" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215396 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf"] Feb 21 00:10:23 crc kubenswrapper[4730]: E0221 00:10:23.215623 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="registry-server" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215640 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="registry-server" Feb 21 00:10:23 crc kubenswrapper[4730]: E0221 00:10:23.215655 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="extract-utilities" Feb 
21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215664 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="extract-utilities" Feb 21 00:10:23 crc kubenswrapper[4730]: E0221 00:10:23.215685 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="extract-content" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215693 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="extract-content" Feb 21 00:10:23 crc kubenswrapper[4730]: E0221 00:10:23.215705 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215713 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215822 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="782ae357-da25-45f3-a274-048aa2ffdbf0" containerName="registry-server" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.215846 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d6599f-a7ce-4d05-a452-ebcb1a1fea65" containerName="oauth-openshift" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.216289 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.219189 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.222558 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.227176 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.227287 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.227291 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.229560 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.229571 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.229676 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.230032 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.230045 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 
00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.230154 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.230577 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.237499 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.237894 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf"] Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.246202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.248452 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.298373 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.298426 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcq97\" (UniqueName: \"kubernetes.io/projected/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-kube-api-access-mcq97\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " 
pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.298456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.298522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300181 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300410 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 
00:10:23.300503 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300608 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300656 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300724 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300772 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300806 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.300908 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: 
\"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402311 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcq97\" (UniqueName: \"kubernetes.io/projected/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-kube-api-access-mcq97\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402340 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402362 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402380 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402404 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402430 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402459 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402476 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " 
pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402540 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402557 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.402572 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.403408 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.403494 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.403883 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.404121 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.404164 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 
00:10:23.406463 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.406503 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.406778 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.407195 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.408428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.409435 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.411736 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.413268 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.423001 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcq97\" (UniqueName: \"kubernetes.io/projected/0ba78b7a-54fb-40b4-84b1-6e42d7b02c03-kube-api-access-mcq97\") pod \"oauth-openshift-5d4b6f47b4-qbklf\" (UID: \"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 
crc kubenswrapper[4730]: I0221 00:10:23.543599 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:23 crc kubenswrapper[4730]: I0221 00:10:23.996759 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf"] Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.323719 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.324083 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.324138 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.324664 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.324736 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" 
podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477" gracePeriod=600 Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.522348 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477" exitCode=0 Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.522405 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477"} Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.524200 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" event={"ID":"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03","Type":"ContainerStarted","Data":"3dfd061a46ffd138a538dbde1b8bee6abd48afab293f6f9985e95f15c88ce5ae"} Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.524224 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" event={"ID":"0ba78b7a-54fb-40b4-84b1-6e42d7b02c03","Type":"ContainerStarted","Data":"078021ae0a906d346f74798675526abb3383fb276c0cfbbb5746bfb7871c0796"} Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.526627 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.555217 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" podStartSLOduration=32.555194511 podStartE2EDuration="32.555194511s" podCreationTimestamp="2026-02-21 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:24.55072954 +0000 UTC m=+226.562296515" watchObservedRunningTime="2026-02-21 00:10:24.555194511 +0000 UTC m=+226.566761476" Feb 21 00:10:24 crc kubenswrapper[4730]: I0221 00:10:24.814543 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qbklf" Feb 21 00:10:25 crc kubenswrapper[4730]: I0221 00:10:25.531691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed"} Feb 21 00:10:35 crc kubenswrapper[4730]: I0221 00:10:35.783627 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"] Feb 21 00:10:35 crc kubenswrapper[4730]: I0221 00:10:35.784407 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" podUID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" containerName="controller-manager" containerID="cri-o://60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1" gracePeriod=30 Feb 21 00:10:35 crc kubenswrapper[4730]: I0221 00:10:35.878776 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"] Feb 21 00:10:35 crc kubenswrapper[4730]: I0221 00:10:35.879037 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" podUID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" containerName="route-controller-manager" containerID="cri-o://015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d" gracePeriod=30 Feb 21 00:10:36 crc 
kubenswrapper[4730]: I0221 00:10:36.342852 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.371662 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config\") pod \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.371730 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert\") pod \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.371787 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca\") pod \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.371931 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqmp5\" (UniqueName: \"kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5\") pod \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\" (UID: \"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.372587 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config" (OuterVolumeSpecName: "config") pod "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" (UID: "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.373595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" (UID: "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.377614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5" (OuterVolumeSpecName: "kube-api-access-fqmp5") pod "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" (UID: "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b"). InnerVolumeSpecName "kube-api-access-fqmp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.377727 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" (UID: "d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.394194 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473227 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca\") pod \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473343 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles\") pod \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473371 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert\") pod \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473409 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5x7g\" (UniqueName: \"kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g\") pod \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473452 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config\") pod \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\" (UID: \"b2cd1a24-92ab-4849-b4f5-a836cda6da06\") " Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473634 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473649 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473658 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.473666 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqmp5\" (UniqueName: \"kubernetes.io/projected/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b-kube-api-access-fqmp5\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.474529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2cd1a24-92ab-4849-b4f5-a836cda6da06" (UID: "b2cd1a24-92ab-4849-b4f5-a836cda6da06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.474548 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config" (OuterVolumeSpecName: "config") pod "b2cd1a24-92ab-4849-b4f5-a836cda6da06" (UID: "b2cd1a24-92ab-4849-b4f5-a836cda6da06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.475136 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2cd1a24-92ab-4849-b4f5-a836cda6da06" (UID: "b2cd1a24-92ab-4849-b4f5-a836cda6da06"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.476801 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2cd1a24-92ab-4849-b4f5-a836cda6da06" (UID: "b2cd1a24-92ab-4849-b4f5-a836cda6da06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.477059 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g" (OuterVolumeSpecName: "kube-api-access-p5x7g") pod "b2cd1a24-92ab-4849-b4f5-a836cda6da06" (UID: "b2cd1a24-92ab-4849-b4f5-a836cda6da06"). InnerVolumeSpecName "kube-api-access-p5x7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.574808 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.574845 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.574856 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2cd1a24-92ab-4849-b4f5-a836cda6da06-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.574864 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5x7g\" (UniqueName: \"kubernetes.io/projected/b2cd1a24-92ab-4849-b4f5-a836cda6da06-kube-api-access-p5x7g\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.574874 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2cd1a24-92ab-4849-b4f5-a836cda6da06-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.596360 4730 generic.go:334] "Generic (PLEG): container finished" podID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" containerID="015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d" exitCode=0 Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.596444 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" event={"ID":"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b","Type":"ContainerDied","Data":"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"} Feb 21 
00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.596494 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2" event={"ID":"d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b","Type":"ContainerDied","Data":"6acd56af4ca0fe88942cac1fee31192cd087d8edc0916b60790909af4e8e6962"}
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.596516 4730 scope.go:117] "RemoveContainer" containerID="015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.596670 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.601597 4730 generic.go:334] "Generic (PLEG): container finished" podID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" containerID="60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1" exitCode=0
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.601662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" event={"ID":"b2cd1a24-92ab-4849-b4f5-a836cda6da06","Type":"ContainerDied","Data":"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"}
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.601693 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6" event={"ID":"b2cd1a24-92ab-4849-b4f5-a836cda6da06","Type":"ContainerDied","Data":"e6ee068e657421824f9c423af003732fc7cbde9e28ccf715e56e2f7264aca989"}
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.601694 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.615451 4730 scope.go:117] "RemoveContainer" containerID="015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"
Feb 21 00:10:36 crc kubenswrapper[4730]: E0221 00:10:36.615938 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d\": container with ID starting with 015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d not found: ID does not exist" containerID="015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.615987 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d"} err="failed to get container status \"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d\": rpc error: code = NotFound desc = could not find container \"015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d\": container with ID starting with 015f1751189aed259578e594aa5e7f8bfc5ec74a1e5d6473f83cce63f94aa01d not found: ID does not exist"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.616008 4730 scope.go:117] "RemoveContainer" containerID="60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.630439 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"]
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.634816 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f598978-m97q2"]
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.635416 4730 scope.go:117] "RemoveContainer" containerID="60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"
Feb 21 00:10:36 crc kubenswrapper[4730]: E0221 00:10:36.636447 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1\": container with ID starting with 60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1 not found: ID does not exist" containerID="60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.636477 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1"} err="failed to get container status \"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1\": rpc error: code = NotFound desc = could not find container \"60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1\": container with ID starting with 60c6e5831c03ffb599abf49b7e95120524fd9b811d5ea9d261478593918773a1 not found: ID does not exist"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.638847 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"]
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.640853 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fc67c75cf-mzrt6"]
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.699478 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" path="/var/lib/kubelet/pods/b2cd1a24-92ab-4849-b4f5-a836cda6da06/volumes"
Feb 21 00:10:36 crc kubenswrapper[4730]: I0221 00:10:36.700330 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" path="/var/lib/kubelet/pods/d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b/volumes"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.224496 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"]
Feb 21 00:10:37 crc kubenswrapper[4730]: E0221 00:10:37.224914 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" containerName="controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.224926 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" containerName="controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: E0221 00:10:37.224936 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" containerName="route-controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.224957 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" containerName="route-controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.225098 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cd1a24-92ab-4849-b4f5-a836cda6da06" containerName="controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.225111 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bcff17-a0ec-4a9a-b1cd-4476fe4ea79b" containerName="route-controller-manager"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.225368 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"]
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.225746 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.226327 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.231530 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.231835 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232046 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232166 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.231529 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232592 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232651 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232811 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232896 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.232982 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.233056 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.233428 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.235773 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"]
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.236711 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.247120 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"]
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283491 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5b2\" (UniqueName: \"kubernetes.io/projected/3309b067-2d30-4406-b708-323cb693ee06-kube-api-access-zj5b2\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283551 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3309b067-2d30-4406-b708-323cb693ee06-serving-cert\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283609 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8j4\" (UniqueName: \"kubernetes.io/projected/1e88edab-8573-4336-b21c-4871bd877853-kube-api-access-qn8j4\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283639 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-client-ca\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283667 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-config\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283690 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e88edab-8573-4336-b21c-4871bd877853-serving-cert\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283722 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-proxy-ca-bundles\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283748 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-config\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.283772 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-client-ca\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-client-ca\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384313 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5b2\" (UniqueName: \"kubernetes.io/projected/3309b067-2d30-4406-b708-323cb693ee06-kube-api-access-zj5b2\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384342 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3309b067-2d30-4406-b708-323cb693ee06-serving-cert\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384380 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8j4\" (UniqueName: \"kubernetes.io/projected/1e88edab-8573-4336-b21c-4871bd877853-kube-api-access-qn8j4\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-client-ca\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384422 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-config\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384435 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e88edab-8573-4336-b21c-4871bd877853-serving-cert\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-proxy-ca-bundles\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.384477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-config\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.385330 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-client-ca\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.386616 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-config\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.386842 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-config\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.386925 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3309b067-2d30-4406-b708-323cb693ee06-client-ca\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.388186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e88edab-8573-4336-b21c-4871bd877853-proxy-ca-bundles\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.389393 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3309b067-2d30-4406-b708-323cb693ee06-serving-cert\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.391922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e88edab-8573-4336-b21c-4871bd877853-serving-cert\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.400260 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj5b2\" (UniqueName: \"kubernetes.io/projected/3309b067-2d30-4406-b708-323cb693ee06-kube-api-access-zj5b2\") pod \"route-controller-manager-567f79b7f8-q7j8h\" (UID: \"3309b067-2d30-4406-b708-323cb693ee06\") " pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.403473 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8j4\" (UniqueName: \"kubernetes.io/projected/1e88edab-8573-4336-b21c-4871bd877853-kube-api-access-qn8j4\") pod \"controller-manager-d898cbf8d-5q8gz\" (UID: \"1e88edab-8573-4336-b21c-4871bd877853\") " pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.544986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.552524 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:37 crc kubenswrapper[4730]: I0221 00:10:37.998343 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"]
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.033541 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"]
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.618158 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz" event={"ID":"1e88edab-8573-4336-b21c-4871bd877853","Type":"ContainerStarted","Data":"06536b56d641f229c5418b696d2d45edef60f3d109357f1119a2ce533d7ac433"}
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.618408 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz" event={"ID":"1e88edab-8573-4336-b21c-4871bd877853","Type":"ContainerStarted","Data":"304bc96fe4cbdab2f4e8d06c104d1270440126188686b9a472b448e913f443d6"}
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.618425 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.620188 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h" event={"ID":"3309b067-2d30-4406-b708-323cb693ee06","Type":"ContainerStarted","Data":"cb71e6762530e01ac5fe5415eb3b1d61d61e4c1e90edad4c92fb9bb8085c634d"}
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.620239 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h" event={"ID":"3309b067-2d30-4406-b708-323cb693ee06","Type":"ContainerStarted","Data":"1776dd46cc7cf5b3f4c01407634314f1119c84bef63a1b0c7bfdb0afd67e7bb1"}
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.620422 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.625154 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz"
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.625270 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h"
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.635717 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d898cbf8d-5q8gz" podStartSLOduration=3.635702657 podStartE2EDuration="3.635702657s" podCreationTimestamp="2026-02-21 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:38.633678799 +0000 UTC m=+240.645245734" watchObservedRunningTime="2026-02-21 00:10:38.635702657 +0000 UTC m=+240.647269592"
Feb 21 00:10:38 crc kubenswrapper[4730]: I0221 00:10:38.649048 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567f79b7f8-q7j8h" podStartSLOduration=3.64903245 podStartE2EDuration="3.64903245s" podCreationTimestamp="2026-02-21 00:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:38.648380187 +0000 UTC m=+240.659947122" watchObservedRunningTime="2026-02-21 00:10:38.64903245 +0000 UTC m=+240.660599385"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.674790 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.677924 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zr5rw" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="registry-server" containerID="cri-o://53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2" gracePeriod=30
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.690126 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95vbk"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.696515 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-95vbk" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="registry-server" containerID="cri-o://1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8" gracePeriod=30
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.697229 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.697476 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" containerID="cri-o://2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db" gracePeriod=30
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.701078 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.701374 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmq2r" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="registry-server" containerID="cri-o://839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4" gracePeriod=30
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.707799 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.708041 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ffq6b" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="registry-server" containerID="cri-o://9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377" gracePeriod=30
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.712527 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-snqnv"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.713195 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.724333 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-snqnv"]
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.747134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.747210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94htr\" (UniqueName: \"kubernetes.io/projected/69773a41-0e64-40ec-913e-be1b7abff235-kube-api-access-94htr\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.747311 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.850085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.850586 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.850642 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94htr\" (UniqueName: \"kubernetes.io/projected/69773a41-0e64-40ec-913e-be1b7abff235-kube-api-access-94htr\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.851746 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.857214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69773a41-0e64-40ec-913e-be1b7abff235-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:41 crc kubenswrapper[4730]: I0221 00:10:41.868871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94htr\" (UniqueName: \"kubernetes.io/projected/69773a41-0e64-40ec-913e-be1b7abff235-kube-api-access-94htr\") pod \"marketplace-operator-79b997595-snqnv\" (UID: \"69773a41-0e64-40ec-913e-be1b7abff235\") " pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.037977 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.183125 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr5rw"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.254067 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content\") pod \"406888dc-7d00-47e2-8c63-05e0106525e1\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.254135 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6kw\" (UniqueName: \"kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw\") pod \"406888dc-7d00-47e2-8c63-05e0106525e1\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.254207 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities\") pod \"406888dc-7d00-47e2-8c63-05e0106525e1\" (UID: \"406888dc-7d00-47e2-8c63-05e0106525e1\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.255762 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities" (OuterVolumeSpecName: "utilities") pod "406888dc-7d00-47e2-8c63-05e0106525e1" (UID: "406888dc-7d00-47e2-8c63-05e0106525e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.262231 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw" (OuterVolumeSpecName: "kube-api-access-fx6kw") pod "406888dc-7d00-47e2-8c63-05e0106525e1" (UID: "406888dc-7d00-47e2-8c63-05e0106525e1"). InnerVolumeSpecName "kube-api-access-fx6kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.291975 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95vbk"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.313851 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffq6b"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.322476 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmq2r"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.323384 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb"
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.353236 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406888dc-7d00-47e2-8c63-05e0106525e1" (UID: "406888dc-7d00-47e2-8c63-05e0106525e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca\") pod \"afba91ef-7949-490e-9903-0751d7f84d27\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356413 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content\") pod \"3ed145f4-5a44-4d9c-8287-a9273b31559a\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content\") pod \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356475 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities\") pod \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356494 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jkxt\" (UniqueName: \"kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt\") pod \"6298cfe2-11f4-453f-9cfc-63aceb67b191\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356511 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctvcq\" (UniqueName: \"kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq\") pod \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\" (UID: \"c5b0a1f8-4598-4fb8-982c-91a3e6699c33\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356541 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities\") pod \"3ed145f4-5a44-4d9c-8287-a9273b31559a\" (UID: \"3ed145f4-5a44-4d9c-8287-a9273b31559a\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356576 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8jsk\" (UniqueName: \"kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk\") pod \"afba91ef-7949-490e-9903-0751d7f84d27\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356601 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics\") pod \"afba91ef-7949-490e-9903-0751d7f84d27\" (UID: \"afba91ef-7949-490e-9903-0751d7f84d27\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356630 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content\") pod \"6298cfe2-11f4-453f-9cfc-63aceb67b191\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") "
Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356657 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2vrh\" (UniqueName: \"kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh\") pod \"3ed145f4-5a44-4d9c-8287-a9273b31559a\" (UID:
\"3ed145f4-5a44-4d9c-8287-a9273b31559a\") " Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.356679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities\") pod \"6298cfe2-11f4-453f-9cfc-63aceb67b191\" (UID: \"6298cfe2-11f4-453f-9cfc-63aceb67b191\") " Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.357461 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.357491 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6kw\" (UniqueName: \"kubernetes.io/projected/406888dc-7d00-47e2-8c63-05e0106525e1-kube-api-access-fx6kw\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.357502 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406888dc-7d00-47e2-8c63-05e0106525e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.357799 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "afba91ef-7949-490e-9903-0751d7f84d27" (UID: "afba91ef-7949-490e-9903-0751d7f84d27"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.358182 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities" (OuterVolumeSpecName: "utilities") pod "c5b0a1f8-4598-4fb8-982c-91a3e6699c33" (UID: "c5b0a1f8-4598-4fb8-982c-91a3e6699c33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.365883 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities" (OuterVolumeSpecName: "utilities") pod "3ed145f4-5a44-4d9c-8287-a9273b31559a" (UID: "3ed145f4-5a44-4d9c-8287-a9273b31559a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.387664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities" (OuterVolumeSpecName: "utilities") pod "6298cfe2-11f4-453f-9cfc-63aceb67b191" (UID: "6298cfe2-11f4-453f-9cfc-63aceb67b191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.391352 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt" (OuterVolumeSpecName: "kube-api-access-8jkxt") pod "6298cfe2-11f4-453f-9cfc-63aceb67b191" (UID: "6298cfe2-11f4-453f-9cfc-63aceb67b191"). InnerVolumeSpecName "kube-api-access-8jkxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.393430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5b0a1f8-4598-4fb8-982c-91a3e6699c33" (UID: "c5b0a1f8-4598-4fb8-982c-91a3e6699c33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.394114 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh" (OuterVolumeSpecName: "kube-api-access-s2vrh") pod "3ed145f4-5a44-4d9c-8287-a9273b31559a" (UID: "3ed145f4-5a44-4d9c-8287-a9273b31559a"). InnerVolumeSpecName "kube-api-access-s2vrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.394237 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk" (OuterVolumeSpecName: "kube-api-access-g8jsk") pod "afba91ef-7949-490e-9903-0751d7f84d27" (UID: "afba91ef-7949-490e-9903-0751d7f84d27"). InnerVolumeSpecName "kube-api-access-g8jsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.394528 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq" (OuterVolumeSpecName: "kube-api-access-ctvcq") pod "c5b0a1f8-4598-4fb8-982c-91a3e6699c33" (UID: "c5b0a1f8-4598-4fb8-982c-91a3e6699c33"). InnerVolumeSpecName "kube-api-access-ctvcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.395063 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "afba91ef-7949-490e-9903-0751d7f84d27" (UID: "afba91ef-7949-490e-9903-0751d7f84d27"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.442140 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6298cfe2-11f4-453f-9cfc-63aceb67b191" (UID: "6298cfe2-11f4-453f-9cfc-63aceb67b191"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458662 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2vrh\" (UniqueName: \"kubernetes.io/projected/3ed145f4-5a44-4d9c-8287-a9273b31559a-kube-api-access-s2vrh\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458696 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458710 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458722 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458733 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jkxt\" (UniqueName: \"kubernetes.io/projected/6298cfe2-11f4-453f-9cfc-63aceb67b191-kube-api-access-8jkxt\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458742 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458749 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctvcq\" (UniqueName: \"kubernetes.io/projected/c5b0a1f8-4598-4fb8-982c-91a3e6699c33-kube-api-access-ctvcq\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458757 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458764 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8jsk\" (UniqueName: \"kubernetes.io/projected/afba91ef-7949-490e-9903-0751d7f84d27-kube-api-access-g8jsk\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458772 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afba91ef-7949-490e-9903-0751d7f84d27-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.458784 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6298cfe2-11f4-453f-9cfc-63aceb67b191-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.497688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ed145f4-5a44-4d9c-8287-a9273b31559a" (UID: "3ed145f4-5a44-4d9c-8287-a9273b31559a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.560274 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed145f4-5a44-4d9c-8287-a9273b31559a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.577535 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-snqnv"] Feb 21 00:10:42 crc kubenswrapper[4730]: W0221 00:10:42.581633 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69773a41_0e64_40ec_913e_be1b7abff235.slice/crio-68b38470346fbad29287f608bc1e48625c9a7f32e978592c756860cf1565a609 WatchSource:0}: Error finding container 68b38470346fbad29287f608bc1e48625c9a7f32e978592c756860cf1565a609: Status 404 returned error can't find the container with id 68b38470346fbad29287f608bc1e48625c9a7f32e978592c756860cf1565a609 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.641225 4730 generic.go:334] "Generic (PLEG): container finished" podID="afba91ef-7949-490e-9903-0751d7f84d27" containerID="2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db" exitCode=0 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.641300 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.641321 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" event={"ID":"afba91ef-7949-490e-9903-0751d7f84d27","Type":"ContainerDied","Data":"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.641367 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pt6nb" event={"ID":"afba91ef-7949-490e-9903-0751d7f84d27","Type":"ContainerDied","Data":"c39aa8a9b89f48b525a6971b911f85436a061489c07784ca7ee799f5569e9844"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.641386 4730 scope.go:117] "RemoveContainer" containerID="2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.647237 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerID="9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377" exitCode=0 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.647325 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerDied","Data":"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.647351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffq6b" event={"ID":"3ed145f4-5a44-4d9c-8287-a9273b31559a","Type":"ContainerDied","Data":"a9266cf6ed47805fd3bba560a47807ba57a9419c9effc1fca2679a4453c7d9ea"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.647433 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffq6b" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.653341 4730 generic.go:334] "Generic (PLEG): container finished" podID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerID="839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4" exitCode=0 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.653421 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerDied","Data":"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.653448 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmq2r" event={"ID":"c5b0a1f8-4598-4fb8-982c-91a3e6699c33","Type":"ContainerDied","Data":"f4ccc8e5f2ca01e888d2c7010ff9932e0141b2c74b6d1ad360b00d1b8d5e0eed"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.653548 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmq2r" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.654697 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv" event={"ID":"69773a41-0e64-40ec-913e-be1b7abff235","Type":"ContainerStarted","Data":"68b38470346fbad29287f608bc1e48625c9a7f32e978592c756860cf1565a609"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.665578 4730 generic.go:334] "Generic (PLEG): container finished" podID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerID="1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8" exitCode=0 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.666093 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95vbk" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.667329 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerDied","Data":"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.667360 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95vbk" event={"ID":"6298cfe2-11f4-453f-9cfc-63aceb67b191","Type":"ContainerDied","Data":"83b8769bb988fb24d5f075c33a5733f5141ae57f9c61254ada6b154d5a1b5119"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.672257 4730 scope.go:117] "RemoveContainer" containerID="2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.672643 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db\": container with ID starting with 2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db not found: ID does not exist" containerID="2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.672673 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db"} err="failed to get container status \"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db\": rpc error: code = NotFound desc = could not find container \"2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db\": container with ID starting with 2dc940bbd4c4486f1da4e68fcc126f846fc36b9fa3923dfae8241f3b4134e8db not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: 
I0221 00:10:42.672692 4730 scope.go:117] "RemoveContainer" containerID="9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.672768 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.675222 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pt6nb"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.675281 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerDied","Data":"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.675228 4730 generic.go:334] "Generic (PLEG): container finished" podID="406888dc-7d00-47e2-8c63-05e0106525e1" containerID="53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2" exitCode=0 Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.675311 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr5rw" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.675323 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr5rw" event={"ID":"406888dc-7d00-47e2-8c63-05e0106525e1","Type":"ContainerDied","Data":"4e99e843e2b726988a5fedfa42b2b525ccd0cd44742d921accf72977afc7181e"} Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.689983 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.692615 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ffq6b"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.695363 4730 scope.go:117] "RemoveContainer" containerID="eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.710804 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" path="/var/lib/kubelet/pods/3ed145f4-5a44-4d9c-8287-a9273b31559a/volumes" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.713439 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afba91ef-7949-490e-9903-0751d7f84d27" path="/var/lib/kubelet/pods/afba91ef-7949-490e-9903-0751d7f84d27/volumes" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.714910 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.714938 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmq2r"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.722202 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.727724 
4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zr5rw"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.734397 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95vbk"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.735961 4730 scope.go:117] "RemoveContainer" containerID="530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.736780 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95vbk"] Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.748901 4730 scope.go:117] "RemoveContainer" containerID="9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.749214 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377\": container with ID starting with 9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377 not found: ID does not exist" containerID="9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.749274 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377"} err="failed to get container status \"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377\": rpc error: code = NotFound desc = could not find container \"9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377\": container with ID starting with 9427f8a1cbf6641162ddeb42f1bc7e5b8b120cc7cc60b3e210ecca06aeda4377 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.749294 4730 scope.go:117] "RemoveContainer" 
containerID="eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.749643 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af\": container with ID starting with eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af not found: ID does not exist" containerID="eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.749663 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af"} err="failed to get container status \"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af\": rpc error: code = NotFound desc = could not find container \"eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af\": container with ID starting with eb71eb927f3e43e0ab56b1fa6fdb01f2bbc8322be7db6aaa32bb6afc8627c8af not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.749675 4730 scope.go:117] "RemoveContainer" containerID="530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.750036 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47\": container with ID starting with 530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47 not found: ID does not exist" containerID="530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.750057 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47"} err="failed to get container status \"530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47\": rpc error: code = NotFound desc = could not find container \"530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47\": container with ID starting with 530fc43610554a192fcb1653abeef62c1d6a5b18d61c9ff27f59dc17d6ed4a47 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.750071 4730 scope.go:117] "RemoveContainer" containerID="839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.760387 4730 scope.go:117] "RemoveContainer" containerID="c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.775738 4730 scope.go:117] "RemoveContainer" containerID="d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.790322 4730 scope.go:117] "RemoveContainer" containerID="839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.791152 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4\": container with ID starting with 839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4 not found: ID does not exist" containerID="839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791194 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4"} err="failed to get container status \"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4\": rpc error: code = 
NotFound desc = could not find container \"839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4\": container with ID starting with 839ec09805f019b2e1bcc08a430d93e8386196791db6bc3f1158dda8e1d5d6d4 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791220 4730 scope.go:117] "RemoveContainer" containerID="c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.791613 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39\": container with ID starting with c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39 not found: ID does not exist" containerID="c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791652 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39"} err="failed to get container status \"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39\": rpc error: code = NotFound desc = could not find container \"c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39\": container with ID starting with c73cee1529bd09241d2a01f93de18dfde3461250a9ba54fc7affda6750275a39 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791679 4730 scope.go:117] "RemoveContainer" containerID="d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.791930 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965\": container with ID starting with 
d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965 not found: ID does not exist" containerID="d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791967 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965"} err="failed to get container status \"d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965\": rpc error: code = NotFound desc = could not find container \"d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965\": container with ID starting with d498d5d6cbedab1e6781b94f1cf262e6b61e29a4be258c507c4b859981579965 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.791982 4730 scope.go:117] "RemoveContainer" containerID="1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.807742 4730 scope.go:117] "RemoveContainer" containerID="daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.824174 4730 scope.go:117] "RemoveContainer" containerID="dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.873988 4730 scope.go:117] "RemoveContainer" containerID="1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.874901 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8\": container with ID starting with 1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8 not found: ID does not exist" containerID="1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 
00:10:42.874957 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8"} err="failed to get container status \"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8\": rpc error: code = NotFound desc = could not find container \"1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8\": container with ID starting with 1ada47546cd308b165d5699a80e248e3a00c10d5b8ee2733db6a32a59e1549b8 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.874987 4730 scope.go:117] "RemoveContainer" containerID="daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.875389 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc\": container with ID starting with daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc not found: ID does not exist" containerID="daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.875428 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc"} err="failed to get container status \"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc\": rpc error: code = NotFound desc = could not find container \"daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc\": container with ID starting with daf81fd740b9bc1546e1493a9ecef0bcb9f75bcd823427bbd8e6ee45ea71d2fc not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.875455 4730 scope.go:117] "RemoveContainer" containerID="dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0" Feb 21 00:10:42 crc 
kubenswrapper[4730]: E0221 00:10:42.875730 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0\": container with ID starting with dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0 not found: ID does not exist" containerID="dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.875756 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0"} err="failed to get container status \"dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0\": rpc error: code = NotFound desc = could not find container \"dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0\": container with ID starting with dec5202c5fd49b5eb8418fab74c6abb17b90bcfc5d9d20a6baa3257fec22fca0 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.875783 4730 scope.go:117] "RemoveContainer" containerID="53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.894038 4730 scope.go:117] "RemoveContainer" containerID="1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.915440 4730 scope.go:117] "RemoveContainer" containerID="fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.931180 4730 scope.go:117] "RemoveContainer" containerID="53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.931581 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2\": container with ID starting with 53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2 not found: ID does not exist" containerID="53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.931610 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2"} err="failed to get container status \"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2\": rpc error: code = NotFound desc = could not find container \"53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2\": container with ID starting with 53754d8c17042b805d7249144197106d8b64b93e85e4ff2976778ff80cb11ce2 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.931631 4730 scope.go:117] "RemoveContainer" containerID="1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.931991 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44\": container with ID starting with 1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44 not found: ID does not exist" containerID="1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.932038 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44"} err="failed to get container status \"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44\": rpc error: code = NotFound desc = could not find container \"1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44\": container with ID 
starting with 1f7d81ec102b99f5fac1bb8182fb6aed2ad16db2cceb57c27c6c17ddb57c4e44 not found: ID does not exist" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.932068 4730 scope.go:117] "RemoveContainer" containerID="fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b" Feb 21 00:10:42 crc kubenswrapper[4730]: E0221 00:10:42.932325 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b\": container with ID starting with fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b not found: ID does not exist" containerID="fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b" Feb 21 00:10:42 crc kubenswrapper[4730]: I0221 00:10:42.932349 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b"} err="failed to get container status \"fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b\": rpc error: code = NotFound desc = could not find container \"fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b\": container with ID starting with fc0436bf3370a77ae3703fb5bf0f2a2af335e5cc15b8c6cef755e99763f6b71b not found: ID does not exist" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.686419 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv" event={"ID":"69773a41-0e64-40ec-913e-be1b7abff235","Type":"ContainerStarted","Data":"a7544b3750b2d3dbc6c8ee57a5ff4b4db4e012fc77e82b08e7601141a741055d"} Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.688780 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.691227 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.706920 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-snqnv" podStartSLOduration=2.706898819 podStartE2EDuration="2.706898819s" podCreationTimestamp="2026-02-21 00:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:10:43.70634011 +0000 UTC m=+245.717907055" watchObservedRunningTime="2026-02-21 00:10:43.706898819 +0000 UTC m=+245.718465754" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891141 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"] Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891345 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891360 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891371 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891377 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891386 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891392 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891398 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891404 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891415 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891421 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891435 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891442 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891454 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891460 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891469 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891475 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891483 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891488 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891496 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891503 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891514 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891521 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891530 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891538 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="extract-utilities" Feb 21 00:10:43 crc kubenswrapper[4730]: E0221 00:10:43.891547 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891554 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="extract-content" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891650 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891664 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed145f4-5a44-4d9c-8287-a9273b31559a" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891677 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="afba91ef-7949-490e-9903-0751d7f84d27" containerName="marketplace-operator" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891688 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.891699 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" containerName="registry-server" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.892444 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.897185 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.904777 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"] Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.978939 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.978998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76j7\" (UniqueName: \"kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:43 crc kubenswrapper[4730]: I0221 00:10:43.979031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.081621 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities\") pod \"redhat-marketplace-4xfbv\" (UID: 
\"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.081821 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h76j7\" (UniqueName: \"kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.082476 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.082611 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.086619 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.087274 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4nhxt"] Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.088111 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.090523 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.098842 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nhxt"] Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.111173 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h76j7\" (UniqueName: \"kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7\") pod \"redhat-marketplace-4xfbv\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") " pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.189155 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-utilities\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.189263 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-catalog-content\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.189360 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkf9\" (UniqueName: \"kubernetes.io/projected/a43d1320-32b1-4d4d-a815-263b30821c6a-kube-api-access-mkkf9\") pod \"certified-operators-4nhxt\" (UID: 
\"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.212728 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.290109 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-catalog-content\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.290164 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkf9\" (UniqueName: \"kubernetes.io/projected/a43d1320-32b1-4d4d-a815-263b30821c6a-kube-api-access-mkkf9\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.290201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-utilities\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.290620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-utilities\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.290815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43d1320-32b1-4d4d-a815-263b30821c6a-catalog-content\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.310840 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkf9\" (UniqueName: \"kubernetes.io/projected/a43d1320-32b1-4d4d-a815-263b30821c6a-kube-api-access-mkkf9\") pod \"certified-operators-4nhxt\" (UID: \"a43d1320-32b1-4d4d-a815-263b30821c6a\") " pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.428883 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4nhxt" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.597805 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"] Feb 21 00:10:44 crc kubenswrapper[4730]: W0221 00:10:44.606622 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e7fcfd3_5996_44fa_8ff0_54f6c7fcc067.slice/crio-ec670af7d41ae3dec892d16dde33150fcd7edd253a8b12b47bc5bc3769cd4eb7 WatchSource:0}: Error finding container ec670af7d41ae3dec892d16dde33150fcd7edd253a8b12b47bc5bc3769cd4eb7: Status 404 returned error can't find the container with id ec670af7d41ae3dec892d16dde33150fcd7edd253a8b12b47bc5bc3769cd4eb7 Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.691841 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerStarted","Data":"ec670af7d41ae3dec892d16dde33150fcd7edd253a8b12b47bc5bc3769cd4eb7"} Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.705234 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="406888dc-7d00-47e2-8c63-05e0106525e1" path="/var/lib/kubelet/pods/406888dc-7d00-47e2-8c63-05e0106525e1/volumes" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.706919 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6298cfe2-11f4-453f-9cfc-63aceb67b191" path="/var/lib/kubelet/pods/6298cfe2-11f4-453f-9cfc-63aceb67b191/volumes" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.708155 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b0a1f8-4598-4fb8-982c-91a3e6699c33" path="/var/lib/kubelet/pods/c5b0a1f8-4598-4fb8-982c-91a3e6699c33/volumes" Feb 21 00:10:44 crc kubenswrapper[4730]: I0221 00:10:44.804889 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nhxt"] Feb 21 00:10:44 crc kubenswrapper[4730]: W0221 00:10:44.810650 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda43d1320_32b1_4d4d_a815_263b30821c6a.slice/crio-be2cbc5a126959b75bfc0a68b5063769afcd5ecf82674243f84ba4fe1ce29340 WatchSource:0}: Error finding container be2cbc5a126959b75bfc0a68b5063769afcd5ecf82674243f84ba4fe1ce29340: Status 404 returned error can't find the container with id be2cbc5a126959b75bfc0a68b5063769afcd5ecf82674243f84ba4fe1ce29340 Feb 21 00:10:45 crc kubenswrapper[4730]: I0221 00:10:45.700820 4730 generic.go:334] "Generic (PLEG): container finished" podID="a43d1320-32b1-4d4d-a815-263b30821c6a" containerID="17ce9279d4265e7222778d10cf9c8fcaebfcebbc51a811fde13ebadd3dd1263a" exitCode=0 Feb 21 00:10:45 crc kubenswrapper[4730]: I0221 00:10:45.700918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nhxt" event={"ID":"a43d1320-32b1-4d4d-a815-263b30821c6a","Type":"ContainerDied","Data":"17ce9279d4265e7222778d10cf9c8fcaebfcebbc51a811fde13ebadd3dd1263a"} Feb 21 00:10:45 crc kubenswrapper[4730]: I0221 00:10:45.701381 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nhxt" event={"ID":"a43d1320-32b1-4d4d-a815-263b30821c6a","Type":"ContainerStarted","Data":"be2cbc5a126959b75bfc0a68b5063769afcd5ecf82674243f84ba4fe1ce29340"} Feb 21 00:10:45 crc kubenswrapper[4730]: I0221 00:10:45.702891 4730 generic.go:334] "Generic (PLEG): container finished" podID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerID="7efad4e9e0e70fa5b2ca8b3303f62f6fd5ba0eb6a979c4ea088387565482a4e0" exitCode=0 Feb 21 00:10:45 crc kubenswrapper[4730]: I0221 00:10:45.703009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerDied","Data":"7efad4e9e0e70fa5b2ca8b3303f62f6fd5ba0eb6a979c4ea088387565482a4e0"} Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.291699 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56pck"] Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.293312 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.296569 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.302609 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56pck"] Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.317455 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-utilities\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.317728 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-catalog-content\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.317836 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlw6\" (UniqueName: \"kubernetes.io/projected/a096d4c9-914f-4529-9fd9-6e699e91ab00-kube-api-access-qhlw6\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.418677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlw6\" (UniqueName: \"kubernetes.io/projected/a096d4c9-914f-4529-9fd9-6e699e91ab00-kube-api-access-qhlw6\") pod \"redhat-operators-56pck\" (UID: 
\"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.418723 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-utilities\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.418801 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-catalog-content\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.419392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-catalog-content\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.419392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a096d4c9-914f-4529-9fd9-6e699e91ab00-utilities\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.436339 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlw6\" (UniqueName: \"kubernetes.io/projected/a096d4c9-914f-4529-9fd9-6e699e91ab00-kube-api-access-qhlw6\") pod \"redhat-operators-56pck\" (UID: \"a096d4c9-914f-4529-9fd9-6e699e91ab00\") " 
pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.491384 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lh9g8"] Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.492457 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.495639 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.500169 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh9g8"] Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.524061 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-utilities\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.524150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkn8b\" (UniqueName: \"kubernetes.io/projected/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-kube-api-access-fkn8b\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.524259 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-catalog-content\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " 
pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.611055 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56pck" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.624857 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-utilities\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.625227 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-utilities\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.625271 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkn8b\" (UniqueName: \"kubernetes.io/projected/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-kube-api-access-fkn8b\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.625339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-catalog-content\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.625580 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-catalog-content\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.652094 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkn8b\" (UniqueName: \"kubernetes.io/projected/f2ccbf5c-7182-4185-b59e-5d43e2fd29c6-kube-api-access-fkn8b\") pod \"community-operators-lh9g8\" (UID: \"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6\") " pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.713551 4730 generic.go:334] "Generic (PLEG): container finished" podID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerID="d7f95bf88a2621a491fd84240851d5ff4a1c99c82179ceff16326f5b40123293" exitCode=0 Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.713672 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerDied","Data":"d7f95bf88a2621a491fd84240851d5ff4a1c99c82179ceff16326f5b40123293"} Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.716553 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nhxt" event={"ID":"a43d1320-32b1-4d4d-a815-263b30821c6a","Type":"ContainerStarted","Data":"aa7f8fa50d5dac69e44be67fa2dfacb17e9151909d92536efafeef991261e3e9"} Feb 21 00:10:46 crc kubenswrapper[4730]: I0221 00:10:46.809572 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh9g8" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.001836 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56pck"] Feb 21 00:10:47 crc kubenswrapper[4730]: W0221 00:10:47.004609 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda096d4c9_914f_4529_9fd9_6e699e91ab00.slice/crio-ea4bd0585cae679d64efae5da6cb73abeae0fb4dacace710d41c12e6618ca30a WatchSource:0}: Error finding container ea4bd0585cae679d64efae5da6cb73abeae0fb4dacace710d41c12e6618ca30a: Status 404 returned error can't find the container with id ea4bd0585cae679d64efae5da6cb73abeae0fb4dacace710d41c12e6618ca30a Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.170879 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh9g8"] Feb 21 00:10:47 crc kubenswrapper[4730]: W0221 00:10:47.229768 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ccbf5c_7182_4185_b59e_5d43e2fd29c6.slice/crio-fa9f9f0ecd40a38139216e7656bf00a95bff5830476aecd34dcd01e8007b06f6 WatchSource:0}: Error finding container fa9f9f0ecd40a38139216e7656bf00a95bff5830476aecd34dcd01e8007b06f6: Status 404 returned error can't find the container with id fa9f9f0ecd40a38139216e7656bf00a95bff5830476aecd34dcd01e8007b06f6 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.464959 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.465678 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.465922 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08" gracePeriod=15 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.466093 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027" gracePeriod=15 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.466117 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82" gracePeriod=15 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.466150 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.466158 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2" gracePeriod=15 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.466158 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993" gracePeriod=15 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467147 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467290 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467303 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467313 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467319 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467329 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467335 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467345 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467351 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467359 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467366 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467375 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467381 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467466 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467476 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467484 4730 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467493 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467500 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.467579 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467586 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.467690 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.503592 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539764 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539822 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539868 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539893 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539929 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.539961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.540023 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.540052 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640744 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640819 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640846 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640883 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640962 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.640996 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641012 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641021 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641110 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641185 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.641271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.725763 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.727956 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.728732 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.728753 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2" exitCode=0 Feb 21 00:10:47 
crc kubenswrapper[4730]: I0221 00:10:47.728761 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.728769 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993" exitCode=2 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.728833 4730 scope.go:117] "RemoveContainer" containerID="39556d7825e11169cebbb5f0d58456896e409e67dd705ccefc5222c4fb95983a" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.731262 4730 generic.go:334] "Generic (PLEG): container finished" podID="a43d1320-32b1-4d4d-a815-263b30821c6a" containerID="aa7f8fa50d5dac69e44be67fa2dfacb17e9151909d92536efafeef991261e3e9" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.731315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nhxt" event={"ID":"a43d1320-32b1-4d4d-a815-263b30821c6a","Type":"ContainerDied","Data":"aa7f8fa50d5dac69e44be67fa2dfacb17e9151909d92536efafeef991261e3e9"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.731919 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.732099 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: 
connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: E0221 00:10:47.733116 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-4nhxt.18961a81b09691fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-4nhxt,UID:a43d1320-32b1-4d4d-a815-263b30821c6a,APIVersion:v1,ResourceVersion:30048,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 00:10:47.732630013 +0000 UTC m=+249.744196948,LastTimestamp:2026-02-21 00:10:47.732630013 +0000 UTC m=+249.744196948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.733477 4730 generic.go:334] "Generic (PLEG): container finished" podID="a096d4c9-914f-4529-9fd9-6e699e91ab00" containerID="d6456c148feb10e4ab3bcf58157a198fb30082d62c25bede2f415699b8bdb319" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.733547 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56pck" event={"ID":"a096d4c9-914f-4529-9fd9-6e699e91ab00","Type":"ContainerDied","Data":"d6456c148feb10e4ab3bcf58157a198fb30082d62c25bede2f415699b8bdb319"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.733570 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56pck" 
event={"ID":"a096d4c9-914f-4529-9fd9-6e699e91ab00","Type":"ContainerStarted","Data":"ea4bd0585cae679d64efae5da6cb73abeae0fb4dacace710d41c12e6618ca30a"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.734408 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.734805 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.735028 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.736230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerStarted","Data":"2692859bfd18689854990e4ec11ba855f44f88508c12da5825d8036390c7a2e4"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.736854 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 
38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.737203 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.738022 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.738270 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.739883 4730 generic.go:334] "Generic (PLEG): container finished" podID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" containerID="b4457ff1ab2b06ea6ef940257c5e9ebfe75cf92b1f2a0b3be0e1b534af0b9660" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.739959 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh9g8" event={"ID":"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6","Type":"ContainerDied","Data":"b4457ff1ab2b06ea6ef940257c5e9ebfe75cf92b1f2a0b3be0e1b534af0b9660"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.739990 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh9g8" 
event={"ID":"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6","Type":"ContainerStarted","Data":"fa9f9f0ecd40a38139216e7656bf00a95bff5830476aecd34dcd01e8007b06f6"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.742553 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.743241 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.743462 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.743650 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.743870 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.744101 4730 generic.go:334] "Generic (PLEG): container finished" podID="e062c181-c9d3-4cee-8284-27620adeafb0" containerID="764144ab43620b20a91df266d1d0ca8456646cdd9dfea0751a905ad4a574c9f7" exitCode=0 Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.744139 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e062c181-c9d3-4cee-8284-27620adeafb0","Type":"ContainerDied","Data":"764144ab43620b20a91df266d1d0ca8456646cdd9dfea0751a905ad4a574c9f7"} Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.744599 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.744823 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.745060 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.745242 4730 status_manager.go:851] 
"Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.745416 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.745601 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:47 crc kubenswrapper[4730]: I0221 00:10:47.804427 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:47 crc kubenswrapper[4730]: W0221 00:10:47.823068 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-75b02971fac9dddeb686f88dcdcb6935d29b572641e2778e1c9f7309b5342d62 WatchSource:0}: Error finding container 75b02971fac9dddeb686f88dcdcb6935d29b572641e2778e1c9f7309b5342d62: Status 404 returned error can't find the container with id 75b02971fac9dddeb686f88dcdcb6935d29b572641e2778e1c9f7309b5342d62 Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.698377 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.699066 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.699351 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.699839 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" 
pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.700226 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.700464 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.733706 4730 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" volumeName="registry-storage" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.750258 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh9g8" event={"ID":"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6","Type":"ContainerStarted","Data":"7db438e953d5ab447da0c8914f237d39c292cc5825bdfec0777b4963e6b32f53"} Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.750867 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751082 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751295 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751459 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751623 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751809 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"05f2d169b5d6b21d3bcb137e011152795d4f144acc4f614b2ea18f57f8936dff"} Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.751969 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"75b02971fac9dddeb686f88dcdcb6935d29b572641e2778e1c9f7309b5342d62"} Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.752353 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.752617 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.752671 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.753137 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.753378 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.753593 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.753713 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nhxt" event={"ID":"a43d1320-32b1-4d4d-a815-263b30821c6a","Type":"ContainerStarted","Data":"0a8b01da7e44fdb4fb5a424267ccf2aba2b4be53c7c4b1de683ffc2ab1264992"} Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.754162 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.754403 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.754617 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.754824 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.755050 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.757052 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.759317 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56pck" event={"ID":"a096d4c9-914f-4529-9fd9-6e699e91ab00","Type":"ContainerStarted","Data":"681f706f4c91a7dfd59b2b77f927a745405e2e992601919349d3e225dd15fa9a"} Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.760046 4730 status_manager.go:851] 
"Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.760298 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.760508 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.760714 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.760897 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.877565 4730 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.878451 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.880176 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.880562 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.881010 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:48 crc kubenswrapper[4730]: I0221 00:10:48.881067 4730 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 21 00:10:48 crc kubenswrapper[4730]: E0221 00:10:48.881385 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 21 00:10:49 crc kubenswrapper[4730]: E0221 00:10:49.086443 4730 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.118003 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.118565 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.118906 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.119374 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.119563 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 
crc kubenswrapper[4730]: I0221 00:10:49.119755 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.161979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access\") pod \"e062c181-c9d3-4cee-8284-27620adeafb0\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162103 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock\") pod \"e062c181-c9d3-4cee-8284-27620adeafb0\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir\") pod \"e062c181-c9d3-4cee-8284-27620adeafb0\" (UID: \"e062c181-c9d3-4cee-8284-27620adeafb0\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162364 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock" (OuterVolumeSpecName: "var-lock") pod "e062c181-c9d3-4cee-8284-27620adeafb0" (UID: "e062c181-c9d3-4cee-8284-27620adeafb0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162464 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e062c181-c9d3-4cee-8284-27620adeafb0" (UID: "e062c181-c9d3-4cee-8284-27620adeafb0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162765 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-var-lock\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.162787 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e062c181-c9d3-4cee-8284-27620adeafb0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.170205 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e062c181-c9d3-4cee-8284-27620adeafb0" (UID: "e062c181-c9d3-4cee-8284-27620adeafb0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.264280 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e062c181-c9d3-4cee-8284-27620adeafb0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:49 crc kubenswrapper[4730]: E0221 00:10:49.487900 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.766749 4730 generic.go:334] "Generic (PLEG): container finished" podID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" containerID="7db438e953d5ab447da0c8914f237d39c292cc5825bdfec0777b4963e6b32f53" exitCode=0 Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.766984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh9g8" event={"ID":"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6","Type":"ContainerDied","Data":"7db438e953d5ab447da0c8914f237d39c292cc5825bdfec0777b4963e6b32f53"} Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.767673 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.767894 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: 
connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768084 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768216 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768394 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768578 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e062c181-c9d3-4cee-8284-27620adeafb0","Type":"ContainerDied","Data":"e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad"} Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768603 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dad1220e7ec3aba778be566999fab8fcf3ce790468910d22910f14589646ad" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.768692 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.771190 4730 generic.go:334] "Generic (PLEG): container finished" podID="a096d4c9-914f-4529-9fd9-6e699e91ab00" containerID="681f706f4c91a7dfd59b2b77f927a745405e2e992601919349d3e225dd15fa9a" exitCode=0 Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.771924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56pck" event={"ID":"a096d4c9-914f-4529-9fd9-6e699e91ab00","Type":"ContainerDied","Data":"681f706f4c91a7dfd59b2b77f927a745405e2e992601919349d3e225dd15fa9a"} Feb 21 00:10:49 crc kubenswrapper[4730]: E0221 00:10:49.772612 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.772854 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.773067 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.773208 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.773346 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.773664 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.824357 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.825040 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.825366 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.825706 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.826101 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.828881 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.829555 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.829994 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.830255 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.830445 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.830624 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.830792 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 
38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.830971 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869405 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869396 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.869556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.870459 4730 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.870573 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:49 crc kubenswrapper[4730]: I0221 00:10:49.870651 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.289931 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.702387 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.780229 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56pck" event={"ID":"a096d4c9-914f-4529-9fd9-6e699e91ab00","Type":"ContainerStarted","Data":"5d860b8fd7745f15c01aaf4ecc51e770a41a98ce10be95adca387c1ba3cfd1a5"} Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.781134 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.781277 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.781418 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.781662 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.781812 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.783442 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh9g8" 
event={"ID":"f2ccbf5c-7182-4185-b59e-5d43e2fd29c6","Type":"ContainerStarted","Data":"c15b050d596d0467975b9defded4113db29c53f9bfea2383c0106e1e105036ba"} Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.784506 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.784867 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.785774 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.786214 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.790067 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.791920 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.792678 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08" exitCode=0 Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.792723 4730 scope.go:117] "RemoveContainer" containerID="7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.792783 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.794148 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.794412 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.794656 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" 
pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.794904 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.795117 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.795328 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.796107 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.796320 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" 
pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.796585 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.796819 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.797045 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.797319 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.818964 4730 scope.go:117] "RemoveContainer" containerID="3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2" Feb 21 00:10:50 crc 
kubenswrapper[4730]: I0221 00:10:50.841404 4730 scope.go:117] "RemoveContainer" containerID="31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.853056 4730 scope.go:117] "RemoveContainer" containerID="059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.867521 4730 scope.go:117] "RemoveContainer" containerID="02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.881623 4730 scope.go:117] "RemoveContainer" containerID="23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.897683 4730 scope.go:117] "RemoveContainer" containerID="7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.903327 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\": container with ID starting with 7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027 not found: ID does not exist" containerID="7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.903373 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027"} err="failed to get container status \"7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\": rpc error: code = NotFound desc = could not find container \"7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027\": container with ID starting with 7d2ee19109f6359d6995d3134ceaac9ac5f6c4d8f8c17a2048322fba8bb1d027 not found: ID does not exist" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.903441 
4730 scope.go:117] "RemoveContainer" containerID="3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.903846 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\": container with ID starting with 3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2 not found: ID does not exist" containerID="3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.903876 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2"} err="failed to get container status \"3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\": rpc error: code = NotFound desc = could not find container \"3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2\": container with ID starting with 3a0a4f386ab4d717e2e3a3ea5f25553038cad3937a3b5ebcced5ad03b81f8bd2 not found: ID does not exist" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.903894 4730 scope.go:117] "RemoveContainer" containerID="31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.904171 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\": container with ID starting with 31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82 not found: ID does not exist" containerID="31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904215 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82"} err="failed to get container status \"31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\": rpc error: code = NotFound desc = could not find container \"31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82\": container with ID starting with 31b1bd965e9bd4d840d4fd5f1918d44fbcc70a7808bd6b5715b5400b26f2af82 not found: ID does not exist" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904242 4730 scope.go:117] "RemoveContainer" containerID="059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.904600 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\": container with ID starting with 059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993 not found: ID does not exist" containerID="059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904621 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993"} err="failed to get container status \"059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\": rpc error: code = NotFound desc = could not find container \"059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993\": container with ID starting with 059d5f958e04b2fe77723b6a4b2df2390d54e8219918bcf432d3d20ef77d9993 not found: ID does not exist" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904642 4730 scope.go:117] "RemoveContainer" containerID="02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.904834 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\": container with ID starting with 02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08 not found: ID does not exist" containerID="02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904854 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08"} err="failed to get container status \"02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\": rpc error: code = NotFound desc = could not find container \"02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08\": container with ID starting with 02a77110bc907a91d7d5dea5f3eee19888567526952e9726198d7b038ce2ab08 not found: ID does not exist" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.904865 4730 scope.go:117] "RemoveContainer" containerID="23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc" Feb 21 00:10:50 crc kubenswrapper[4730]: E0221 00:10:50.905076 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\": container with ID starting with 23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc not found: ID does not exist" containerID="23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc" Feb 21 00:10:50 crc kubenswrapper[4730]: I0221 00:10:50.905099 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc"} err="failed to get container status \"23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\": rpc error: code = NotFound desc = could not find container 
\"23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc\": container with ID starting with 23b6403ac588bfb2debd772e9e3b1a072c309b73f9241a679b16031e7239b5fc not found: ID does not exist" Feb 21 00:10:51 crc kubenswrapper[4730]: E0221 00:10:51.013258 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Feb 21 00:10:51 crc kubenswrapper[4730]: E0221 00:10:51.893624 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 21 00:10:53 crc kubenswrapper[4730]: E0221 00:10:53.065143 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-4nhxt.18961a81b09691fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-4nhxt,UID:a43d1320-32b1-4d4d-a815-263b30821c6a,APIVersion:v1,ResourceVersion:30048,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 00:10:47.732630013 +0000 UTC m=+249.744196948,LastTimestamp:2026-02-21 00:10:47.732630013 +0000 UTC m=+249.744196948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 00:10:54 crc kubenswrapper[4730]: 
I0221 00:10:54.212911 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.213299 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.267860 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xfbv" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.268477 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.268724 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.269157 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.269878 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.270150 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.429904 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4nhxt"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.429995 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4nhxt"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.471109 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4nhxt"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.471776 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.472087 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.472395 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.472666 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.472909 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.855729 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4nhxt"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.856151 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.856334 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.856660 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.856962 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.857245 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.859045 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xfbv"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.859416 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.859734 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.860043 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.860236 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:54 crc kubenswrapper[4730]: I0221 00:10:54.860565 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:55 crc kubenswrapper[4730]: E0221 00:10:55.095178 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="6.4s"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.611846 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56pck"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.611917 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56pck"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.651548 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56pck"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.652145 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.652659 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.653272 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.653561 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.653863 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.810591 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lh9g8"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.810905 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lh9g8"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.869693 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lh9g8"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.870286 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.870723 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.871316 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.871347 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56pck"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.871609 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.871971 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.872321 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.872638 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.872876 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.873176 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:56 crc kubenswrapper[4730]: I0221 00:10:56.873501 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.692697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.693501 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.693890 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.694448 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.694770 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.695113 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.705750 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.705781 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:57 crc kubenswrapper[4730]: E0221 00:10:57.706168 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.706769 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:10:57 crc kubenswrapper[4730]: W0221 00:10:57.725567 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-53b9f3131ef4573f97d5eb869a13bad90add56fa92a2c2136935fa8e698ca115 WatchSource:0}: Error finding container 53b9f3131ef4573f97d5eb869a13bad90add56fa92a2c2136935fa8e698ca115: Status 404 returned error can't find the container with id 53b9f3131ef4573f97d5eb869a13bad90add56fa92a2c2136935fa8e698ca115
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.827986 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"53b9f3131ef4573f97d5eb869a13bad90add56fa92a2c2136935fa8e698ca115"}
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.891292 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lh9g8"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.891726 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.892042 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.892235 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.892423 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:57 crc kubenswrapper[4730]: I0221 00:10:57.892621 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.700025 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.700785 4730 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.701168 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.701468 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.701769 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.702049 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.833511 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ebc46498120b8abd9c1cdf561d93ba3e8b616ec4ce261d80f1c5545c34d2cca"}
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.833864 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.833889 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:58 crc kubenswrapper[4730]: E0221 00:10:58.834274 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.834311 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.834693 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.834975 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.835282 4730 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.835640 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:58 crc kubenswrapper[4730]: I0221 00:10:58.835901 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.841797 4730 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9ebc46498120b8abd9c1cdf561d93ba3e8b616ec4ce261d80f1c5545c34d2cca" exitCode=0
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.841932 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9ebc46498120b8abd9c1cdf561d93ba3e8b616ec4ce261d80f1c5545c34d2cca"}
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.842058 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.842671 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.842826 4730 status_manager.go:851] "Failed to get status for pod" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" pod="openshift-marketplace/redhat-marketplace-4xfbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4xfbv\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.843338 4730 status_manager.go:851] "Failed to get status for pod" podUID="a096d4c9-914f-4529-9fd9-6e699e91ab00" pod="openshift-marketplace/redhat-operators-56pck" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-56pck\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: E0221 00:10:59.843576 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.843804 4730 status_manager.go:851] "Failed to get status for pod" podUID="f2ccbf5c-7182-4185-b59e-5d43e2fd29c6" pod="openshift-marketplace/community-operators-lh9g8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lh9g8\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.844170 4730 status_manager.go:851] "Failed to get status for pod" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.844469 4730 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:10:59 crc kubenswrapper[4730]: I0221 00:10:59.844679 4730 status_manager.go:851] "Failed to get status for pod" podUID="a43d1320-32b1-4d4d-a815-263b30821c6a" pod="openshift-marketplace/certified-operators-4nhxt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4nhxt\": dial tcp 38.102.83.164:6443: connect: connection refused"
Feb 21 00:11:00 crc kubenswrapper[4730]: I0221 00:11:00.849232 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6de4dccd4ae70f0aa49710b7c106d11f8ca5a6af685ce9bfce1f292b58aa36e7"}
Feb 21 00:11:00 crc kubenswrapper[4730]: I0221 00:11:00.849854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cffc840ce72c41a13ef4dffcaf53ac34d81692d6a2af067af98a69a921abcd3f"}
Feb 21 00:11:00 crc kubenswrapper[4730]: I0221 00:11:00.849869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6638c8d81399c8a589ec666491fd532785d4cbe89c0e38b5cc87a68df5b780d4"}
Feb 21 00:11:00 crc kubenswrapper[4730]: I0221 00:11:00.849880 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5ca14d9a48049c0077b7bee3ea020fc001d1ed2563e7d54ea7d1546ba8ecea1"}
Feb 21 00:11:01 crc kubenswrapper[4730]: I0221 00:11:01.856335 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d664ab5874ed07c207adf5b5fc9c6198325939ac7ef7c38787638936680a2604"}
Feb 21 00:11:01 crc kubenswrapper[4730]: I0221 00:11:01.856526 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:01 crc kubenswrapper[4730]: I0221 00:11:01.856624 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:11:01 crc kubenswrapper[4730]: I0221 00:11:01.856648 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4"
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.707286 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.707463 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.714151 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]log ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]etcd ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-filter ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-informers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-controllers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/crd-informer-synced ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-system-namespaces-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 21 00:11:02 crc kubenswrapper[4730]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/bootstrap-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-registration-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-discovery-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]autoregister-completion ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapi-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 21 00:11:02 crc kubenswrapper[4730]: livez check failed
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.714228 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.870136 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.870626 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f" exitCode=1
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.870739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f"}
Feb 21 00:11:02 crc kubenswrapper[4730]: I0221 00:11:02.871325 4730 scope.go:117] "RemoveContainer" containerID="71c33c85bd0d060cdebabaef93ee6ba4f9704865f35d5e58b03c4a091630056f"
Feb 21 00:11:03 crc kubenswrapper[4730]: I0221 00:11:03.527838 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 00:11:03 crc kubenswrapper[4730]: I0221 00:11:03.879369 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 21 00:11:03 crc kubenswrapper[4730]: I0221 00:11:03.879436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fe7d4deabe4616c3e5fd01d3aaaee6300b8908abdd471cc8dfdd399ac89eea2"}
Feb
21 00:11:06 crc kubenswrapper[4730]: I0221 00:11:06.873145 4730 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 00:11:06 crc kubenswrapper[4730]: I0221 00:11:06.895145 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4" Feb 21 00:11:06 crc kubenswrapper[4730]: I0221 00:11:06.895178 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36b7d0c1-8747-4652-a573-2e8f0e1cb6e4" Feb 21 00:11:06 crc kubenswrapper[4730]: I0221 00:11:06.903795 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6983cd1f-6f4f-4fff-a1ef-3eebd65997e4" Feb 21 00:11:07 crc kubenswrapper[4730]: I0221 00:11:07.613151 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:11:07 crc kubenswrapper[4730]: I0221 00:11:07.617770 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:11:07 crc kubenswrapper[4730]: I0221 00:11:07.902654 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:11:13 crc kubenswrapper[4730]: I0221 00:11:13.531918 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 00:11:18 crc kubenswrapper[4730]: I0221 00:11:18.014482 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 21 00:11:18 crc kubenswrapper[4730]: I0221 00:11:18.453371 4730 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 21 00:11:18 crc kubenswrapper[4730]: I0221 00:11:18.571027 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 21 00:11:18 crc kubenswrapper[4730]: I0221 00:11:18.803783 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 21 00:11:18 crc kubenswrapper[4730]: I0221 00:11:18.941290 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.162763 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.471657 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.600914 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.768119 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.796070 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.852918 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 21 00:11:19 crc kubenswrapper[4730]: I0221 00:11:19.877337 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 21 00:11:19 crc kubenswrapper[4730]: 
I0221 00:11:19.895564 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.023182 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.096586 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.327906 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.362179 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.473674 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.473924 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.484529 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.488531 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.572580 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.656609 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.670623 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 21 00:11:20 crc kubenswrapper[4730]: I0221 00:11:20.775776 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.049563 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.085807 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.088443 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.119330 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.137999 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.167316 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.221546 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.477772 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.571651 4730 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.576037 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 21 00:11:21 crc kubenswrapper[4730]: I0221 00:11:21.760583 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.014039 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.015968 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.022695 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.026014 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.026160 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.028101 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.030673 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.047510 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.129283 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.218244 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.228390 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.351660 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.443970 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.446155 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.496915 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.509876 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.654264 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.772460 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.867602 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 00:11:22 crc 
kubenswrapper[4730]: I0221 00:11:22.878846 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.885721 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.894389 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.949182 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 00:11:22 crc kubenswrapper[4730]: I0221 00:11:22.951735 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.127812 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.181473 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.243873 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.288923 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.412244 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.539327 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 21 00:11:23 crc 
kubenswrapper[4730]: I0221 00:11:23.558789 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.568449 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.585468 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.611822 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.637425 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.759275 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.811306 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.864076 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.891265 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 21 00:11:23 crc kubenswrapper[4730]: I0221 00:11:23.968981 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.095034 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.273034 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.287897 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.401860 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.536377 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.661537 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.666274 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.718764 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.727533 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.745407 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.753213 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.766139 4730 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.783137 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.833405 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.834501 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.878542 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.910637 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.969159 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.984912 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 21 00:11:24 crc kubenswrapper[4730]: I0221 00:11:24.989903 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.031617 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.141418 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.170891 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.176713 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.211482 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.317103 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.323125 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.329891 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.373032 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.405524 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.514607 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.534420 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.604700 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.613718 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.662099 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.668470 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.671523 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.713719 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.731002 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.740700 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.748135 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.755256 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.811188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.874865 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.889651 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.895823 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 00:11:25 crc kubenswrapper[4730]: I0221 00:11:25.897924 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.114117 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.179519 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.198188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.218188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.264756 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.363918 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.420597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 21 00:11:26 crc kubenswrapper[4730]: 
I0221 00:11:26.450812 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.458692 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.471917 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.473115 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.503357 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.529215 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.559466 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.573266 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.618269 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.696907 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.735837 4730 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.807727 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 21 00:11:26 crc kubenswrapper[4730]: I0221 00:11:26.840578 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.001265 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.001483 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.003430 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.005596 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.041593 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.320263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.335728 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.350474 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.353506 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.401250 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.444535 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.480827 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.505555 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.599073 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.607176 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.663087 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.691874 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.701216 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.818835 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 21 00:11:27 crc kubenswrapper[4730]: I0221 00:11:27.959063 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.013080 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.073719 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.097624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.185630 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.209668 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.268064 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.448570 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.480015 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.560817 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.568775 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.620248 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.628270 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.666925 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.670043 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.816340 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.876288 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.916570 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.953881 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 21 00:11:28 crc kubenswrapper[4730]: I0221 00:11:28.958492 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.028473 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.037438 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.054609 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.056186 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.065981 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.092256 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.134748 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.148383 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.171908 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.202788 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.235187 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.321336 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.378025 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.404190 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.444155 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.470855 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.566560 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.643995 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.679768 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.701902 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.750429 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.833692 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 21 00:11:29 crc kubenswrapper[4730]: I0221 00:11:29.888692 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.051100 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.083803 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.150628 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.224570 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.236894 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.304113 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.304183 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.351886 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.368155 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.468621 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.692309 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.711419 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.767542 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.790125 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.879410 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.879431 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.886809 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 21 00:11:30 crc kubenswrapper[4730]: I0221 00:11:30.984491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.034018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.051835 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.145206 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.214595 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.396009 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.396325 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56pck" podStartSLOduration=42.952685028 podStartE2EDuration="45.39631346s" podCreationTimestamp="2026-02-21 00:10:46 +0000 UTC" firstStartedPulling="2026-02-21 00:10:47.741911418 +0000 UTC m=+249.753478353" lastFinishedPulling="2026-02-21 00:10:50.18553985 +0000 UTC m=+252.197106785" observedRunningTime="2026-02-21 00:11:06.973032577 +0000 UTC m=+268.984599512" watchObservedRunningTime="2026-02-21 00:11:31.39631346 +0000 UTC m=+293.407880395"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.396606 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4nhxt" podStartSLOduration=44.951256017 podStartE2EDuration="47.396602418s" podCreationTimestamp="2026-02-21 00:10:44 +0000 UTC" firstStartedPulling="2026-02-21 00:10:45.702619074 +0000 UTC m=+247.714186019" lastFinishedPulling="2026-02-21 00:10:48.147965485 +0000 UTC m=+250.159532420" observedRunningTime="2026-02-21 00:11:06.943628923 +0000 UTC m=+268.955195858" watchObservedRunningTime="2026-02-21 00:11:31.396602418 +0000 UTC m=+293.408169353"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.396812 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xfbv" podStartSLOduration=46.961498188 podStartE2EDuration="48.396808833s" podCreationTimestamp="2026-02-21 00:10:43 +0000 UTC" firstStartedPulling="2026-02-21 00:10:45.704489718 +0000 UTC m=+247.716056673" lastFinishedPulling="2026-02-21 00:10:47.139800383 +0000 UTC m=+249.151367318" observedRunningTime="2026-02-21 00:11:06.958977971 +0000 UTC m=+268.970544906" watchObservedRunningTime="2026-02-21 00:11:31.396808833 +0000 UTC m=+293.408375768"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.399685 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lh9g8" podStartSLOduration=42.999462483 podStartE2EDuration="45.399676633s" podCreationTimestamp="2026-02-21 00:10:46 +0000 UTC" firstStartedPulling="2026-02-21 00:10:47.741999341 +0000 UTC m=+249.753566276" lastFinishedPulling="2026-02-21 00:10:50.142213491 +0000 UTC m=+252.153780426" observedRunningTime="2026-02-21 00:11:06.90168153 +0000 UTC m=+268.913248465" watchObservedRunningTime="2026-02-21 00:11:31.399676633 +0000 UTC m=+293.411243568"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.400108 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.400140 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.404054 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.419293 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.419276486 podStartE2EDuration="25.419276486s" podCreationTimestamp="2026-02-21 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:11:31.416049186 +0000 UTC m=+293.427616121" watchObservedRunningTime="2026-02-21 00:11:31.419276486 +0000 UTC m=+293.430843421"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.439810 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.473044 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.481200 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.491900 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.526894 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.560205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.636670 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.687492 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.774806 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 21 00:11:31 crc kubenswrapper[4730]: I0221 00:11:31.799808 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.334651 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.613010 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.711493 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.808018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.872846 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 21 00:11:32 crc kubenswrapper[4730]: I0221 00:11:32.946703 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 21 00:11:33 crc kubenswrapper[4730]: I0221 00:11:33.084706 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 00:11:33 crc kubenswrapper[4730]: I0221 00:11:33.448047 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 21 00:11:33 crc kubenswrapper[4730]: I0221 00:11:33.567218 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 00:11:33 crc kubenswrapper[4730]: I0221 00:11:33.733077 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 21 00:11:33 crc kubenswrapper[4730]: I0221 00:11:33.799240 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 21 00:11:34 crc kubenswrapper[4730]: I0221 00:11:34.401627 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 21 00:11:34 crc kubenswrapper[4730]: I0221 00:11:34.804636 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 21 00:11:34 crc kubenswrapper[4730]: I0221 00:11:34.827880 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 21 00:11:35 crc kubenswrapper[4730]: I0221 00:11:35.267225 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 21 00:11:35 crc kubenswrapper[4730]: I0221 00:11:35.464301 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 21 00:11:38 crc kubenswrapper[4730]: I0221 00:11:38.451475 4730 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 21 00:11:40 crc kubenswrapper[4730]: I0221 00:11:40.812093 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 21 00:11:40 crc kubenswrapper[4730]: I0221 00:11:40.812719 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://05f2d169b5d6b21d3bcb137e011152795d4f144acc4f614b2ea18f57f8936dff" gracePeriod=5
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.161713 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.162025 4730 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="05f2d169b5d6b21d3bcb137e011152795d4f144acc4f614b2ea18f57f8936dff" exitCode=137
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.412369 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.412427 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552615 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552691 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552737 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552767 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552808 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552847 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552913 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.552924 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.553002 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.553413 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.553446 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.553464 4730 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.553480 4730 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.564658 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.654694 4730 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 21 00:11:46 crc kubenswrapper[4730]: I0221 00:11:46.699781 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 21 00:11:47 crc kubenswrapper[4730]: I0221 00:11:47.169551 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 21 00:11:47 crc kubenswrapper[4730]: I0221 00:11:47.169627 4730 scope.go:117] "RemoveContainer" containerID="05f2d169b5d6b21d3bcb137e011152795d4f144acc4f614b2ea18f57f8936dff"
Feb 21 00:11:47 crc kubenswrapper[4730]: I0221 00:11:47.169671 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 21 00:12:24 crc kubenswrapper[4730]: I0221 00:12:24.322728 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:12:24 crc kubenswrapper[4730]: I0221 00:12:24.323318 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:12:54 crc kubenswrapper[4730]: I0221 00:12:54.322819 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:12:54 crc kubenswrapper[4730]: I0221 00:12:54.323421 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.241676 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zxdkr"]
Feb 21 00:12:56 crc kubenswrapper[4730]: E0221 00:12:56.241926 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" containerName="installer"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.241964 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" containerName="installer"
Feb 21 00:12:56 crc kubenswrapper[4730]: E0221 00:12:56.241981 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.241991 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.242111 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e062c181-c9d3-4cee-8284-27620adeafb0" containerName="installer"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.242122 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.242471 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.253336 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zxdkr"]
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383535 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-tls\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383595 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-bound-sa-token\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383627 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-trusted-ca\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383661 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvx8\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-kube-api-access-2kvx8\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383800 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383852 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.383902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-certificates\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.384022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.414059 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.485895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-tls\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.485971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-bound-sa-token\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.485999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-trusted-ca\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.486020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvx8\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-kube-api-access-2kvx8\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr"
Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.486049 4730 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.486076 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-certificates\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.486105 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.486560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.487249 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-trusted-ca\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 
crc kubenswrapper[4730]: I0221 00:12:56.487591 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-certificates\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.492838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.493705 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-registry-tls\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.502833 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-bound-sa-token\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.503411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvx8\" (UniqueName: \"kubernetes.io/projected/0aaa1e91-c3ec-4848-8da6-04f92e7e0279-kube-api-access-2kvx8\") pod \"image-registry-66df7c8f76-zxdkr\" (UID: \"0aaa1e91-c3ec-4848-8da6-04f92e7e0279\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:56 crc kubenswrapper[4730]: I0221 00:12:56.558589 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:57 crc kubenswrapper[4730]: I0221 00:12:57.001274 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zxdkr"] Feb 21 00:12:57 crc kubenswrapper[4730]: W0221 00:12:57.007883 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aaa1e91_c3ec_4848_8da6_04f92e7e0279.slice/crio-e9e9d5bab21fc74363fe5deee37a09c6d2b73d1def1191b96f6c1bc05e89698e WatchSource:0}: Error finding container e9e9d5bab21fc74363fe5deee37a09c6d2b73d1def1191b96f6c1bc05e89698e: Status 404 returned error can't find the container with id e9e9d5bab21fc74363fe5deee37a09c6d2b73d1def1191b96f6c1bc05e89698e Feb 21 00:12:57 crc kubenswrapper[4730]: I0221 00:12:57.569763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" event={"ID":"0aaa1e91-c3ec-4848-8da6-04f92e7e0279","Type":"ContainerStarted","Data":"1d0e90afdd546b29217edf8362c147523393cc348d32099ae3d47bf61f448fef"} Feb 21 00:12:57 crc kubenswrapper[4730]: I0221 00:12:57.569832 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" event={"ID":"0aaa1e91-c3ec-4848-8da6-04f92e7e0279","Type":"ContainerStarted","Data":"e9e9d5bab21fc74363fe5deee37a09c6d2b73d1def1191b96f6c1bc05e89698e"} Feb 21 00:12:57 crc kubenswrapper[4730]: I0221 00:12:57.570004 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:12:57 crc kubenswrapper[4730]: I0221 00:12:57.594580 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" podStartSLOduration=1.5945599160000001 podStartE2EDuration="1.594559916s" podCreationTimestamp="2026-02-21 00:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:12:57.590757364 +0000 UTC m=+379.602324329" watchObservedRunningTime="2026-02-21 00:12:57.594559916 +0000 UTC m=+379.606126891" Feb 21 00:13:16 crc kubenswrapper[4730]: I0221 00:13:16.564197 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zxdkr" Feb 21 00:13:16 crc kubenswrapper[4730]: I0221 00:13:16.623254 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.322689 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.323237 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.323282 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.323830 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.323879 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed" gracePeriod=600 Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.733046 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed" exitCode=0 Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.733281 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed"} Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.733351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe"} Feb 21 00:13:24 crc kubenswrapper[4730]: I0221 00:13:24.733378 4730 scope.go:117] "RemoveContainer" containerID="6dd2831e0d75d5ff906db1a77e6f5627d3ac9914aa705cb26bd0b67bf0932477" Feb 21 00:13:41 crc kubenswrapper[4730]: I0221 00:13:41.674074 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" 
podUID="8deb8736-a138-49f3-9550-060511014aaf" containerName="registry" containerID="cri-o://1f8395a31b7ef7093b091b6a8a820d9d7779db91b38d74f6a6aa673692e64801" gracePeriod=30 Feb 21 00:13:41 crc kubenswrapper[4730]: I0221 00:13:41.839852 4730 generic.go:334] "Generic (PLEG): container finished" podID="8deb8736-a138-49f3-9550-060511014aaf" containerID="1f8395a31b7ef7093b091b6a8a820d9d7779db91b38d74f6a6aa673692e64801" exitCode=0 Feb 21 00:13:41 crc kubenswrapper[4730]: I0221 00:13:41.840106 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" event={"ID":"8deb8736-a138-49f3-9550-060511014aaf","Type":"ContainerDied","Data":"1f8395a31b7ef7093b091b6a8a820d9d7779db91b38d74f6a6aa673692e64801"} Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.113487 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152337 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152405 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152687 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9zdk\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152742 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152794 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152831 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.152867 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted\") pod \"8deb8736-a138-49f3-9550-060511014aaf\" (UID: \"8deb8736-a138-49f3-9550-060511014aaf\") " Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.153636 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.153774 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.158608 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.159274 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk" (OuterVolumeSpecName: "kube-api-access-g9zdk") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "kube-api-access-g9zdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.159480 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.159826 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.165753 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.194398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8deb8736-a138-49f3-9550-060511014aaf" (UID: "8deb8736-a138-49f3-9550-060511014aaf"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254581 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9zdk\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-kube-api-access-g9zdk\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254636 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8deb8736-a138-49f3-9550-060511014aaf-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254652 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254666 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254678 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8deb8736-a138-49f3-9550-060511014aaf-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254690 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8deb8736-a138-49f3-9550-060511014aaf-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.254703 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8deb8736-a138-49f3-9550-060511014aaf-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 00:13:42 crc 
kubenswrapper[4730]: I0221 00:13:42.851136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" event={"ID":"8deb8736-a138-49f3-9550-060511014aaf","Type":"ContainerDied","Data":"eaa01728f7e5579c4439ba10063f44dc1a603425599780b786e836262c2a4bc2"} Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.851715 4730 scope.go:117] "RemoveContainer" containerID="1f8395a31b7ef7093b091b6a8a820d9d7779db91b38d74f6a6aa673692e64801" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.851403 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rp9n4" Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.888135 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:13:42 crc kubenswrapper[4730]: I0221 00:13:42.893449 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rp9n4"] Feb 21 00:13:44 crc kubenswrapper[4730]: I0221 00:13:44.704337 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8deb8736-a138-49f3-9550-060511014aaf" path="/var/lib/kubelet/pods/8deb8736-a138-49f3-9550-060511014aaf/volumes" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.158753 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m"] Feb 21 00:15:00 crc kubenswrapper[4730]: E0221 00:15:00.159414 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8deb8736-a138-49f3-9550-060511014aaf" containerName="registry" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.159426 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8deb8736-a138-49f3-9550-060511014aaf" containerName="registry" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.159513 4730 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8deb8736-a138-49f3-9550-060511014aaf" containerName="registry" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.159856 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.161968 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.163136 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.167519 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m"] Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.260574 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9cq\" (UniqueName: \"kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.260826 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.260956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.362207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9cq\" (UniqueName: \"kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.362465 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.362495 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.363375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.370776 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.382435 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9cq\" (UniqueName: \"kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq\") pod \"collect-profiles-29527215-hcm5m\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.476125 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:00 crc kubenswrapper[4730]: I0221 00:15:00.630748 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m"] Feb 21 00:15:01 crc kubenswrapper[4730]: I0221 00:15:01.342502 4730 generic.go:334] "Generic (PLEG): container finished" podID="7b38660d-3f1f-44e8-848d-4337835ceec4" containerID="69e60db53155f2ac2a921f0f99f9a584d2042c75cbf460ff84185bfdc9ae2f72" exitCode=0 Feb 21 00:15:01 crc kubenswrapper[4730]: I0221 00:15:01.342568 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" event={"ID":"7b38660d-3f1f-44e8-848d-4337835ceec4","Type":"ContainerDied","Data":"69e60db53155f2ac2a921f0f99f9a584d2042c75cbf460ff84185bfdc9ae2f72"} Feb 21 00:15:01 crc kubenswrapper[4730]: I0221 00:15:01.342623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" 
event={"ID":"7b38660d-3f1f-44e8-848d-4337835ceec4","Type":"ContainerStarted","Data":"5f0ea0a54cc7d05b25c9585cd6e97f4e438925b17f7261b10c813506214e1f1f"} Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.624477 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.791009 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume\") pod \"7b38660d-3f1f-44e8-848d-4337835ceec4\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.791047 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume\") pod \"7b38660d-3f1f-44e8-848d-4337835ceec4\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.791143 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n9cq\" (UniqueName: \"kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq\") pod \"7b38660d-3f1f-44e8-848d-4337835ceec4\" (UID: \"7b38660d-3f1f-44e8-848d-4337835ceec4\") " Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.792571 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b38660d-3f1f-44e8-848d-4337835ceec4" (UID: "7b38660d-3f1f-44e8-848d-4337835ceec4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.797157 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b38660d-3f1f-44e8-848d-4337835ceec4" (UID: "7b38660d-3f1f-44e8-848d-4337835ceec4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.798449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq" (OuterVolumeSpecName: "kube-api-access-7n9cq") pod "7b38660d-3f1f-44e8-848d-4337835ceec4" (UID: "7b38660d-3f1f-44e8-848d-4337835ceec4"). InnerVolumeSpecName "kube-api-access-7n9cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.892916 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b38660d-3f1f-44e8-848d-4337835ceec4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.892990 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b38660d-3f1f-44e8-848d-4337835ceec4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:02 crc kubenswrapper[4730]: I0221 00:15:02.893002 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n9cq\" (UniqueName: \"kubernetes.io/projected/7b38660d-3f1f-44e8-848d-4337835ceec4-kube-api-access-7n9cq\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:03 crc kubenswrapper[4730]: I0221 00:15:03.356687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" 
event={"ID":"7b38660d-3f1f-44e8-848d-4337835ceec4","Type":"ContainerDied","Data":"5f0ea0a54cc7d05b25c9585cd6e97f4e438925b17f7261b10c813506214e1f1f"} Feb 21 00:15:03 crc kubenswrapper[4730]: I0221 00:15:03.356719 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527215-hcm5m" Feb 21 00:15:03 crc kubenswrapper[4730]: I0221 00:15:03.356722 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0ea0a54cc7d05b25c9585cd6e97f4e438925b17f7261b10c813506214e1f1f" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.075374 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kp9wk"] Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076010 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-controller" containerID="cri-o://2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076024 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="nbdb" containerID="cri-o://15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076066 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076166 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-acl-logging" containerID="cri-o://65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076177 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="northd" containerID="cri-o://97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076150 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-node" containerID="cri-o://db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.076150 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="sbdb" containerID="cri-o://cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.122243 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" containerID="cri-o://676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" gracePeriod=30 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.391826 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/3.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.394867 4730 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovn-acl-logging/0.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.395445 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovn-controller/0.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.396099 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.417957 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/2.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.419510 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/1.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.419549 4730 generic.go:334] "Generic (PLEG): container finished" podID="900f07ef-9762-49ec-9551-41a6ce12659d" containerID="0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a" exitCode=2 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.419607 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerDied","Data":"0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.419642 4730 scope.go:117] "RemoveContainer" containerID="510a72ec0d60c9f63d204030f0b90470c6f0d2885e680d5ef94e87553b1ce833" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.420107 4730 scope.go:117] "RemoveContainer" containerID="0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.420392 4730 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gsndg_openshift-multus(900f07ef-9762-49ec-9551-41a6ce12659d)\"" pod="openshift-multus/multus-gsndg" podUID="900f07ef-9762-49ec-9551-41a6ce12659d" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.429666 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovnkube-controller/3.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.435596 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovn-acl-logging/0.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.438524 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kp9wk_c6272ef5-e657-4f64-a217-305dddfe36cd/ovn-controller/0.log" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439067 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439116 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439126 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439133 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" 
containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439140 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439148 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" exitCode=0 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439155 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" exitCode=143 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439161 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" exitCode=143 Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439181 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439215 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" 
event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439225 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439236 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439245 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439255 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439265 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439272 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439277 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439282 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439287 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439292 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439298 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439303 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439308 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439322 4730 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439328 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439333 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439338 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439344 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439349 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439354 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439359 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439364 4730 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439368 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439375 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439382 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439388 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439393 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439398 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439403 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 
00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439408 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439412 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439417 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439423 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439428 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" event={"ID":"c6272ef5-e657-4f64-a217-305dddfe36cd","Type":"ContainerDied","Data":"7a9a898b68a8e76634ab931e756163c6ffc6a9713cedb2b2db777b0abb8b602e"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439445 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439450 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439455 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439460 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439465 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439470 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439475 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439481 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439485 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439491 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.439607 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kp9wk" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.456856 4730 scope.go:117] "RemoveContainer" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463291 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5nv97"] Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463496 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="sbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463507 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="sbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463516 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463521 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463528 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="nbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463535 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="nbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463542 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" 
containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463549 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463576 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-node" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463582 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-node" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463590 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463596 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463604 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kubecfg-setup" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463611 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kubecfg-setup" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463620 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="northd" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463626 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="northd" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463633 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" 
containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463639 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463645 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-acl-logging" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463650 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-acl-logging" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463657 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463663 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463675 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463682 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463690 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b38660d-3f1f-44e8-848d-4337835ceec4" containerName="collect-profiles" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463696 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b38660d-3f1f-44e8-848d-4337835ceec4" containerName="collect-profiles" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463780 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463791 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-acl-logging" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463797 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="sbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463804 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463810 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463817 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b38660d-3f1f-44e8-848d-4337835ceec4" containerName="collect-profiles" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463825 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovn-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463834 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463841 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="nbdb" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463850 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="northd" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463859 4730 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="kube-rbac-proxy-node" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.463956 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.463964 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.464054 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.464063 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" containerName="ovnkube-controller" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.465561 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.478844 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.498230 4730 scope.go:117] "RemoveContainer" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.512151 4730 scope.go:117] "RemoveContainer" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.513997 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514046 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514069 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514095 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: 
\"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514128 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514150 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514173 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514191 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514221 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514253 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514279 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzw4b\" (UniqueName: \"kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514304 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514329 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514345 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514364 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 
00:15:12.514380 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514403 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514424 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514476 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket\") pod \"c6272ef5-e657-4f64-a217-305dddfe36cd\" (UID: \"c6272ef5-e657-4f64-a217-305dddfe36cd\") " Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514699 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket" 
(OuterVolumeSpecName: "log-socket") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.514732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.515898 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.516056 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash" (OuterVolumeSpecName: "host-slash") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.516096 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.516176 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.516208 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.516740 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518001 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518045 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518069 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518196 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518263 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log" (OuterVolumeSpecName: "node-log") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518306 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518402 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.518713 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.520962 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.521398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b" (OuterVolumeSpecName: "kube-api-access-wzw4b") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "kube-api-access-wzw4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.527433 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c6272ef5-e657-4f64-a217-305dddfe36cd" (UID: "c6272ef5-e657-4f64-a217-305dddfe36cd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.529172 4730 scope.go:117] "RemoveContainer" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.541425 4730 scope.go:117] "RemoveContainer" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.552235 4730 scope.go:117] "RemoveContainer" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.561570 4730 scope.go:117] "RemoveContainer" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.570982 4730 scope.go:117] "RemoveContainer" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.582855 4730 scope.go:117] "RemoveContainer" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.597174 4730 scope.go:117] "RemoveContainer" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.597557 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": container with ID starting with 676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0 not found: ID does not exist" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.597594 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} err="failed to get container status \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": rpc error: code = NotFound desc = could not find container \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": container with ID starting with 676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.597614 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.597887 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": container with ID starting with c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9 not found: ID does not exist" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.597919 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} err="failed to get container status \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": rpc error: code = NotFound desc = could not find container \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": container with ID starting with c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.597960 4730 scope.go:117] "RemoveContainer" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.598341 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": container with ID starting with cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088 not found: ID does not exist" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598362 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} err="failed to get container status \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": rpc error: code = NotFound desc = could not find container \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": container with ID starting with cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598376 4730 scope.go:117] "RemoveContainer" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.598639 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": container with ID starting with 15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127 not found: ID does not exist" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598658 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} err="failed to get container status \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": rpc error: code = NotFound desc = could not find container 
\"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": container with ID starting with 15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598669 4730 scope.go:117] "RemoveContainer" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.598895 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": container with ID starting with 97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b not found: ID does not exist" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598916 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} err="failed to get container status \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": rpc error: code = NotFound desc = could not find container \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": container with ID starting with 97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.598931 4730 scope.go:117] "RemoveContainer" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.599253 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": container with ID starting with ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7 not found: ID does not exist" 
containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599281 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} err="failed to get container status \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": rpc error: code = NotFound desc = could not find container \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": container with ID starting with ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599298 4730 scope.go:117] "RemoveContainer" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.599655 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": container with ID starting with db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163 not found: ID does not exist" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599675 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} err="failed to get container status \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": rpc error: code = NotFound desc = could not find container \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": container with ID starting with db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599687 4730 scope.go:117] 
"RemoveContainer" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.599926 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": container with ID starting with 65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c not found: ID does not exist" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599956 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} err="failed to get container status \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": rpc error: code = NotFound desc = could not find container \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": container with ID starting with 65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.599968 4730 scope.go:117] "RemoveContainer" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.600171 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": container with ID starting with 2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7 not found: ID does not exist" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600198 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} err="failed to get container status \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": rpc error: code = NotFound desc = could not find container \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": container with ID starting with 2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600270 4730 scope.go:117] "RemoveContainer" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: E0221 00:15:12.600566 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": container with ID starting with e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5 not found: ID does not exist" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600588 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} err="failed to get container status \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": rpc error: code = NotFound desc = could not find container \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": container with ID starting with e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600605 4730 scope.go:117] "RemoveContainer" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600854 4730 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} err="failed to get container status \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": rpc error: code = NotFound desc = could not find container \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": container with ID starting with 676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.600869 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601121 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} err="failed to get container status \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": rpc error: code = NotFound desc = could not find container \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": container with ID starting with c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601140 4730 scope.go:117] "RemoveContainer" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601356 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} err="failed to get container status \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": rpc error: code = NotFound desc = could not find container \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": container with ID starting with cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088 not 
found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601371 4730 scope.go:117] "RemoveContainer" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601560 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} err="failed to get container status \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": rpc error: code = NotFound desc = could not find container \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": container with ID starting with 15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601587 4730 scope.go:117] "RemoveContainer" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601809 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} err="failed to get container status \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": rpc error: code = NotFound desc = could not find container \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": container with ID starting with 97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.601831 4730 scope.go:117] "RemoveContainer" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602113 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} err="failed to get 
container status \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": rpc error: code = NotFound desc = could not find container \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": container with ID starting with ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602132 4730 scope.go:117] "RemoveContainer" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602330 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} err="failed to get container status \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": rpc error: code = NotFound desc = could not find container \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": container with ID starting with db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602347 4730 scope.go:117] "RemoveContainer" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602536 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} err="failed to get container status \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": rpc error: code = NotFound desc = could not find container \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": container with ID starting with 65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602555 4730 scope.go:117] "RemoveContainer" 
containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602769 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} err="failed to get container status \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": rpc error: code = NotFound desc = could not find container \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": container with ID starting with 2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.602785 4730 scope.go:117] "RemoveContainer" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603025 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} err="failed to get container status \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": rpc error: code = NotFound desc = could not find container \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": container with ID starting with e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603041 4730 scope.go:117] "RemoveContainer" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603280 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} err="failed to get container status \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": rpc error: code = NotFound desc = could 
not find container \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": container with ID starting with 676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603327 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603552 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} err="failed to get container status \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": rpc error: code = NotFound desc = could not find container \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": container with ID starting with c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603574 4730 scope.go:117] "RemoveContainer" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603762 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} err="failed to get container status \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": rpc error: code = NotFound desc = could not find container \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": container with ID starting with cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.603785 4730 scope.go:117] "RemoveContainer" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 
00:15:12.604013 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} err="failed to get container status \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": rpc error: code = NotFound desc = could not find container \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": container with ID starting with 15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604028 4730 scope.go:117] "RemoveContainer" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604247 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} err="failed to get container status \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": rpc error: code = NotFound desc = could not find container \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": container with ID starting with 97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604264 4730 scope.go:117] "RemoveContainer" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604504 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} err="failed to get container status \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": rpc error: code = NotFound desc = could not find container \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": container with ID starting with 
ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604525 4730 scope.go:117] "RemoveContainer" containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604761 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} err="failed to get container status \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": rpc error: code = NotFound desc = could not find container \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": container with ID starting with db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.604777 4730 scope.go:117] "RemoveContainer" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605010 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} err="failed to get container status \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": rpc error: code = NotFound desc = could not find container \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": container with ID starting with 65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605025 4730 scope.go:117] "RemoveContainer" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605247 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} err="failed to get container status \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": rpc error: code = NotFound desc = could not find container \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": container with ID starting with 2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605270 4730 scope.go:117] "RemoveContainer" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605520 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} err="failed to get container status \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": rpc error: code = NotFound desc = could not find container \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": container with ID starting with e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605540 4730 scope.go:117] "RemoveContainer" containerID="676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605815 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0"} err="failed to get container status \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": rpc error: code = NotFound desc = could not find container \"676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0\": container with ID starting with 676fc7780f0ff91a8e2890bfcc51e182e21ff8f94f798ddadba497738b6671a0 not found: ID does not 
exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.605832 4730 scope.go:117] "RemoveContainer" containerID="c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606100 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9"} err="failed to get container status \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": rpc error: code = NotFound desc = could not find container \"c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9\": container with ID starting with c48d80ecd3b485c699dc6bfcbf903fee6c676610b7736f4731c9555574bd49a9 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606117 4730 scope.go:117] "RemoveContainer" containerID="cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606330 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088"} err="failed to get container status \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": rpc error: code = NotFound desc = could not find container \"cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088\": container with ID starting with cbd79edc0a96845fc3c0f77e819c088f0c1faad05e55abec0352a0deeba83088 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606349 4730 scope.go:117] "RemoveContainer" containerID="15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606537 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127"} err="failed to get container status 
\"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": rpc error: code = NotFound desc = could not find container \"15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127\": container with ID starting with 15c3519b099abaa6b3ae754f6be49f608f6ed192c4dbc0cfdce91bfa2d049127 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606549 4730 scope.go:117] "RemoveContainer" containerID="97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606745 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b"} err="failed to get container status \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": rpc error: code = NotFound desc = could not find container \"97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b\": container with ID starting with 97df98502e563e525574606078a57e38fd8faaf41d10d64067a75a3f3f63822b not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.606760 4730 scope.go:117] "RemoveContainer" containerID="ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607069 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7"} err="failed to get container status \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": rpc error: code = NotFound desc = could not find container \"ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7\": container with ID starting with ea445cc0db9587441b8a9fd35e107c10211466d8d3123fa188703cbc2dc921a7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607112 4730 scope.go:117] "RemoveContainer" 
containerID="db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607319 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163"} err="failed to get container status \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": rpc error: code = NotFound desc = could not find container \"db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163\": container with ID starting with db9e066f3478ed6570890ac02745b269a48c2f23f50f19d6b2ae2160eaddd163 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607334 4730 scope.go:117] "RemoveContainer" containerID="65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607591 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c"} err="failed to get container status \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": rpc error: code = NotFound desc = could not find container \"65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c\": container with ID starting with 65764a529813469a5e3dd7985d799d5f761845d4e797f225b125fec1f4e5499c not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607612 4730 scope.go:117] "RemoveContainer" containerID="2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607857 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7"} err="failed to get container status \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": rpc error: code = NotFound desc = could 
not find container \"2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7\": container with ID starting with 2bb9adb3e7a3fd5afb4f261eb0921bc3249d7653958e31939e31671ab03cdaf7 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.607875 4730 scope.go:117] "RemoveContainer" containerID="e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.608138 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5"} err="failed to get container status \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": rpc error: code = NotFound desc = could not find container \"e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5\": container with ID starting with e9aa8a6ee418d7f8fbc26bb970c4e9490311f3e64be168943665e2ce20664db5 not found: ID does not exist" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616120 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616152 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-script-lib\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616170 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-slash\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616189 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-node-log\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616208 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b25e4c80-84f0-4e09-b147-2d2679e59873-ovn-node-metrics-cert\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616225 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-kubelet\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616240 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-log-socket\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616268 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-netd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616283 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-systemd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616296 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-config\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616536 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-env-overrides\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616603 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-systemd-units\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616694 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-netns\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-var-lib-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-etc-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616819 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjj28\" (UniqueName: \"kubernetes.io/projected/b25e4c80-84f0-4e09-b147-2d2679e59873-kube-api-access-tjj28\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616843 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-ovn\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.616876 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-bin\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617001 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617012 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzw4b\" (UniqueName: \"kubernetes.io/projected/c6272ef5-e657-4f64-a217-305dddfe36cd-kube-api-access-wzw4b\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617022 4730 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-slash\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 
00:15:12.617031 4730 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617039 4730 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617047 4730 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617057 4730 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617066 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617075 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617083 4730 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617094 4730 reconciler_common.go:293] "Volume detached for 
volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-log-socket\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617102 4730 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617111 4730 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617120 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617129 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6272ef5-e657-4f64-a217-305dddfe36cd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617138 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617146 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617156 4730 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617165 4730 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6272ef5-e657-4f64-a217-305dddfe36cd-node-log\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.617176 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6272ef5-e657-4f64-a217-305dddfe36cd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.718364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-slash\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.718567 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-slash\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.718825 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.718899 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719010 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-script-lib\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-node-log\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b25e4c80-84f0-4e09-b147-2d2679e59873-ovn-node-metrics-cert\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719596 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-log-socket\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719597 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-node-log\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719674 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-log-socket\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719732 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-kubelet\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719807 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-netd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719852 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-systemd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719934 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-config\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720033 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-env-overrides\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-netd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-systemd-units\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720145 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-systemd-units\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.719975 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-kubelet\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720210 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720258 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-netns\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720311 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-var-lib-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720343 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-run-netns\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720369 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-etc-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-systemd\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720400 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjj28\" (UniqueName: \"kubernetes.io/projected/b25e4c80-84f0-4e09-b147-2d2679e59873-kube-api-access-tjj28\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720431 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-ovn\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-var-lib-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720469 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-bin\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-cni-bin\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720610 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-ovn\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720621 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-etc-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720644 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-run-openvswitch\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b25e4c80-84f0-4e09-b147-2d2679e59873-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.720909 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-env-overrides\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.721601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-config\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.722923 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b25e4c80-84f0-4e09-b147-2d2679e59873-ovnkube-script-lib\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.723010 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b25e4c80-84f0-4e09-b147-2d2679e59873-ovn-node-metrics-cert\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.740925 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjj28\" (UniqueName: \"kubernetes.io/projected/b25e4c80-84f0-4e09-b147-2d2679e59873-kube-api-access-tjj28\") pod \"ovnkube-node-5nv97\" (UID: \"b25e4c80-84f0-4e09-b147-2d2679e59873\") " pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.771708 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kp9wk"]
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.776247 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kp9wk"]
Feb 21 00:15:12 crc kubenswrapper[4730]: I0221 00:15:12.779187 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:13 crc kubenswrapper[4730]: I0221 00:15:13.450767 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/2.log"
Feb 21 00:15:13 crc kubenswrapper[4730]: I0221 00:15:13.454843 4730 generic.go:334] "Generic (PLEG): container finished" podID="b25e4c80-84f0-4e09-b147-2d2679e59873" containerID="c4367fdb2c5cde85ada4be8ed6df8d0b24d5a67805182a6d510a97c4b8136082" exitCode=0
Feb 21 00:15:13 crc kubenswrapper[4730]: I0221 00:15:13.454911 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerDied","Data":"c4367fdb2c5cde85ada4be8ed6df8d0b24d5a67805182a6d510a97c4b8136082"}
Feb 21 00:15:13 crc kubenswrapper[4730]: I0221 00:15:13.455009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"309b38908331e9484ee6dcfa3240be422d8afc4c6a3b72b5bfd26a18a44af684"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.466600 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"f123eebd0b2ad12f34c56a2b891903fd7eeb836a7bb7c8e6e4cf371df5be1753"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.467284 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"e7a496b66cb8fd417ed189737d0b96c743d9a8e7b6fa218b4ed83c1d752a54fb"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.467308 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"da2e349da1cf5ae304ace6bed62b68462fb2728958283a49fa84c2fc1e309a97"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.467328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"a02500552be75b611b7495cf30fda190f239cb3ca0fece21e28c110c9d5b9b99"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.467344 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"c1172b66c7ad86fac0f3b80d4c93770ab909de79f903a507db5a06eacbc10ad7"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.467361 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"2916b6fa0dc92a43a19c2a8ed641318af93f2f8ea8dbb5bbff8946dc0fdb0a4a"}
Feb 21 00:15:14 crc kubenswrapper[4730]: I0221 00:15:14.701058 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6272ef5-e657-4f64-a217-305dddfe36cd" path="/var/lib/kubelet/pods/c6272ef5-e657-4f64-a217-305dddfe36cd/volumes"
Feb 21 00:15:17 crc kubenswrapper[4730]: I0221 00:15:17.494057 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"4e9dc1f818531be18786068fa0761532c68f99cf76b04d3737031f7cbe05a94c"}
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.508337 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" event={"ID":"b25e4c80-84f0-4e09-b147-2d2679e59873","Type":"ContainerStarted","Data":"b3be4f5a40ae7ae772a2e00e0c85514cb8566bb12449a5ca56f32698279be770"}
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.508685 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.508700 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.508711 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.533503 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.536294 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:19 crc kubenswrapper[4730]: I0221 00:15:19.555597 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97" podStartSLOduration=7.555573703 podStartE2EDuration="7.555573703s" podCreationTimestamp="2026-02-21 00:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:15:19.55103029 +0000 UTC m=+521.562597245" watchObservedRunningTime="2026-02-21 00:15:19.555573703 +0000 UTC m=+521.567140648"
Feb 21 00:15:24 crc kubenswrapper[4730]: I0221 00:15:24.322723 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:15:24 crc kubenswrapper[4730]: I0221 00:15:24.323368 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:15:25 crc kubenswrapper[4730]: I0221 00:15:25.693020 4730 scope.go:117] "RemoveContainer" containerID="0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a"
Feb 21 00:15:25 crc kubenswrapper[4730]: E0221 00:15:25.693411 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gsndg_openshift-multus(900f07ef-9762-49ec-9551-41a6ce12659d)\"" pod="openshift-multus/multus-gsndg" podUID="900f07ef-9762-49ec-9551-41a6ce12659d"
Feb 21 00:15:36 crc kubenswrapper[4730]: I0221 00:15:36.694396 4730 scope.go:117] "RemoveContainer" containerID="0b650c7255c3c155b3ce35f6ad60891b9b04293ed0b8791fd3e24881b2f2c55a"
Feb 21 00:15:37 crc kubenswrapper[4730]: I0221 00:15:37.640565 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gsndg_900f07ef-9762-49ec-9551-41a6ce12659d/kube-multus/2.log"
Feb 21 00:15:37 crc kubenswrapper[4730]: I0221 00:15:37.641065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gsndg" event={"ID":"900f07ef-9762-49ec-9551-41a6ce12659d","Type":"ContainerStarted","Data":"039ea0e493d822d06add64e4a2ad0c2b39b1f288dcb1ad54efac7151abdaa42e"}
Feb 21 00:15:42 crc kubenswrapper[4730]: I0221 00:15:42.809215 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5nv97"
Feb 21 00:15:54 crc kubenswrapper[4730]: I0221 00:15:54.322912 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:15:54 crc kubenswrapper[4730]: I0221 00:15:54.323516 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.618047 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"]
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.619367 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xfbv" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="registry-server" containerID="cri-o://2692859bfd18689854990e4ec11ba855f44f88508c12da5825d8036390c7a2e4" gracePeriod=30
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.901253 4730 generic.go:334] "Generic (PLEG): container finished" podID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerID="2692859bfd18689854990e4ec11ba855f44f88508c12da5825d8036390c7a2e4" exitCode=0
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.901297 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerDied","Data":"2692859bfd18689854990e4ec11ba855f44f88508c12da5825d8036390c7a2e4"}
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.973329 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xfbv"
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.999206 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content\") pod \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") "
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.999275 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h76j7\" (UniqueName: \"kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7\") pod \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") "
Feb 21 00:16:18 crc kubenswrapper[4730]: I0221 00:16:18.999415 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities\") pod \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\" (UID: \"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067\") "
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.001775 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities" (OuterVolumeSpecName: "utilities") pod "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" (UID: "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.008600 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7" (OuterVolumeSpecName: "kube-api-access-h76j7") pod "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" (UID: "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067"). InnerVolumeSpecName "kube-api-access-h76j7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.038244 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" (UID: "9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.101476 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.101523 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.101539 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h76j7\" (UniqueName: \"kubernetes.io/projected/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067-kube-api-access-h76j7\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.909750 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xfbv" event={"ID":"9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067","Type":"ContainerDied","Data":"ec670af7d41ae3dec892d16dde33150fcd7edd253a8b12b47bc5bc3769cd4eb7"}
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.909809 4730 scope.go:117] "RemoveContainer" containerID="2692859bfd18689854990e4ec11ba855f44f88508c12da5825d8036390c7a2e4"
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.909861 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xfbv"
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.927345 4730 scope.go:117] "RemoveContainer" containerID="d7f95bf88a2621a491fd84240851d5ff4a1c99c82179ceff16326f5b40123293"
Feb 21 00:16:19 crc kubenswrapper[4730]: I0221 00:16:19.949290 4730 scope.go:117] "RemoveContainer" containerID="7efad4e9e0e70fa5b2ca8b3303f62f6fd5ba0eb6a979c4ea088387565482a4e0"
Feb 21 00:16:20 crc kubenswrapper[4730]: I0221 00:16:20.008025 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"]
Feb 21 00:16:20 crc kubenswrapper[4730]: I0221 00:16:20.013417 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xfbv"]
Feb 21 00:16:20 crc kubenswrapper[4730]: I0221 00:16:20.703863 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" path="/var/lib/kubelet/pods/9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067/volumes"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.318562 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"]
Feb 21 00:16:22 crc kubenswrapper[4730]: E0221 00:16:22.318994 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="extract-content"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.319026 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="extract-content"
Feb 21 00:16:22 crc kubenswrapper[4730]: E0221 00:16:22.319061 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="extract-utilities"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.319079 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="extract-utilities"
Feb 21 00:16:22 crc kubenswrapper[4730]: E0221 00:16:22.319110 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="registry-server"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.319127 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="registry-server"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.319350 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7fcfd3-5996-44fa-8ff0-54f6c7fcc067" containerName="registry-server"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.320712 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.328078 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"]
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.328454 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.342290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.342692 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.342725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shz4c\" (UniqueName: \"kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.444081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.444178 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.444229 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shz4c\" (UniqueName: \"kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.445178 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"
Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.445553 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName:
\"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.468274 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shz4c\" (UniqueName: \"kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.646887 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" Feb 21 00:16:22 crc kubenswrapper[4730]: I0221 00:16:22.931430 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9"] Feb 21 00:16:23 crc kubenswrapper[4730]: I0221 00:16:23.936124 4730 generic.go:334] "Generic (PLEG): container finished" podID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerID="900f96da7db9ffe388b6eb84988a32dffd7b5df90b6a394c3d694b3280b939ef" exitCode=0 Feb 21 00:16:23 crc kubenswrapper[4730]: I0221 00:16:23.936161 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" event={"ID":"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5","Type":"ContainerDied","Data":"900f96da7db9ffe388b6eb84988a32dffd7b5df90b6a394c3d694b3280b939ef"} Feb 21 00:16:23 crc kubenswrapper[4730]: I0221 00:16:23.936185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" event={"ID":"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5","Type":"ContainerStarted","Data":"6bbf1b33a0793d76b46521eca694ea05a69c863ac09788806654652b29eaeb47"} Feb 21 00:16:23 crc kubenswrapper[4730]: I0221 00:16:23.938879 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.323740 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.323812 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.323866 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.324594 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.324684 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe" gracePeriod=600 Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.945810 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe" exitCode=0 Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.945902 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe"} Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.948181 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127"} Feb 21 00:16:24 crc kubenswrapper[4730]: I0221 00:16:24.948252 4730 scope.go:117] "RemoveContainer" containerID="123bfe2acdbe9e91356587500e55b8ee65695af687bb95717966abf26e1256ed" Feb 21 00:16:25 crc kubenswrapper[4730]: I0221 00:16:25.955333 4730 generic.go:334] "Generic (PLEG): container finished" podID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerID="643ff69273d617495da1eb4304de58ad0e4c78d6811745191b5f17c37f6429ff" exitCode=0 Feb 21 00:16:25 crc kubenswrapper[4730]: I0221 00:16:25.955438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" event={"ID":"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5","Type":"ContainerDied","Data":"643ff69273d617495da1eb4304de58ad0e4c78d6811745191b5f17c37f6429ff"} Feb 21 00:16:26 crc 
kubenswrapper[4730]: I0221 00:16:26.963261 4730 generic.go:334] "Generic (PLEG): container finished" podID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerID="90f46ba34e3b41eb791f50a4af453565c82bcf31c274aff7f2981488feac1f12" exitCode=0 Feb 21 00:16:26 crc kubenswrapper[4730]: I0221 00:16:26.963352 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" event={"ID":"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5","Type":"ContainerDied","Data":"90f46ba34e3b41eb791f50a4af453565c82bcf31c274aff7f2981488feac1f12"} Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.251550 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.315441 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle\") pod \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.315599 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util\") pod \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.315667 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shz4c\" (UniqueName: \"kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c\") pod \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\" (UID: \"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5\") " Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.318463 4730 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle" (OuterVolumeSpecName: "bundle") pod "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" (UID: "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.322055 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c" (OuterVolumeSpecName: "kube-api-access-shz4c") pod "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" (UID: "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5"). InnerVolumeSpecName "kube-api-access-shz4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.417329 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.417399 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shz4c\" (UniqueName: \"kubernetes.io/projected/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-kube-api-access-shz4c\") on node \"crc\" DevicePath \"\"" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.481426 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk"] Feb 21 00:16:28 crc kubenswrapper[4730]: E0221 00:16:28.481707 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="pull" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.481720 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="pull" Feb 21 00:16:28 crc kubenswrapper[4730]: E0221 00:16:28.481741 4730 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="util" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.481748 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="util" Feb 21 00:16:28 crc kubenswrapper[4730]: E0221 00:16:28.481757 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="extract" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.481764 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="extract" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.481861 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" containerName="extract" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.482673 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.486291 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util" (OuterVolumeSpecName: "util") pod "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5" (UID: "53644c60-af55-4aa2-8ccc-94b0dbb5e4f5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.493522 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk"] Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.518625 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.518724 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.518765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkv94\" (UniqueName: \"kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.518846 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53644c60-af55-4aa2-8ccc-94b0dbb5e4f5-util\") on node \"crc\" DevicePath \"\"" Feb 21 00:16:28 crc 
kubenswrapper[4730]: I0221 00:16:28.619855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkv94\" (UniqueName: \"kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.620001 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.620117 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.620854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.620928 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.643026 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkv94\" (UniqueName: \"kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.804905 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.977711 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" event={"ID":"53644c60-af55-4aa2-8ccc-94b0dbb5e4f5","Type":"ContainerDied","Data":"6bbf1b33a0793d76b46521eca694ea05a69c863ac09788806654652b29eaeb47"} Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.977772 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bbf1b33a0793d76b46521eca694ea05a69c863ac09788806654652b29eaeb47" Feb 21 00:16:28 crc kubenswrapper[4730]: I0221 00:16:28.977799 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9" Feb 21 00:16:29 crc kubenswrapper[4730]: I0221 00:16:29.024637 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk"] Feb 21 00:16:29 crc kubenswrapper[4730]: W0221 00:16:29.031729 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa1d8d3_2eb9_4be2_90e0_6e8bf9929c51.slice/crio-f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e WatchSource:0}: Error finding container f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e: Status 404 returned error can't find the container with id f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e Feb 21 00:16:29 crc kubenswrapper[4730]: I0221 00:16:29.986791 4730 generic.go:334] "Generic (PLEG): container finished" podID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerID="f9726e05524a73a4046b2ba07480760a0ec4bf2fdc1bb96b5fdf8faaf1aecc74" exitCode=0 Feb 21 00:16:29 crc kubenswrapper[4730]: I0221 00:16:29.986851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" event={"ID":"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51","Type":"ContainerDied","Data":"f9726e05524a73a4046b2ba07480760a0ec4bf2fdc1bb96b5fdf8faaf1aecc74"} Feb 21 00:16:29 crc kubenswrapper[4730]: I0221 00:16:29.987135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" event={"ID":"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51","Type":"ContainerStarted","Data":"f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e"} Feb 21 00:16:31 crc kubenswrapper[4730]: I0221 00:16:31.997919 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerID="b138ef9783310130649444632fd1d907163983ee66d00e750438c176ba98aba9" exitCode=0 Feb 21 00:16:31 crc kubenswrapper[4730]: I0221 00:16:31.997985 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" event={"ID":"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51","Type":"ContainerDied","Data":"b138ef9783310130649444632fd1d907163983ee66d00e750438c176ba98aba9"} Feb 21 00:16:33 crc kubenswrapper[4730]: I0221 00:16:33.005832 4730 generic.go:334] "Generic (PLEG): container finished" podID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerID="d5514cf50995f47b2350fbe3e0bfb9d324501b559dc68bfdd309133cddc8fb7b" exitCode=0 Feb 21 00:16:33 crc kubenswrapper[4730]: I0221 00:16:33.005934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" event={"ID":"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51","Type":"ContainerDied","Data":"d5514cf50995f47b2350fbe3e0bfb9d324501b559dc68bfdd309133cddc8fb7b"} Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.318322 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.385499 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkv94\" (UniqueName: \"kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94\") pod \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.385569 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util\") pod \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.385604 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle\") pod \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\" (UID: \"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51\") " Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.386579 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle" (OuterVolumeSpecName: "bundle") pod "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" (UID: "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.394879 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94" (OuterVolumeSpecName: "kube-api-access-nkv94") pod "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" (UID: "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51"). InnerVolumeSpecName "kube-api-access-nkv94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.486553 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkv94\" (UniqueName: \"kubernetes.io/projected/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-kube-api-access-nkv94\") on node \"crc\" DevicePath \"\"" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.486579 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.541333 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"] Feb 21 00:16:34 crc kubenswrapper[4730]: E0221 00:16:34.541546 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="util" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.541556 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="util" Feb 21 00:16:34 crc kubenswrapper[4730]: E0221 00:16:34.541565 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="extract" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.541571 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="extract" Feb 21 00:16:34 crc kubenswrapper[4730]: E0221 00:16:34.541582 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="pull" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.541588 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="pull" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.541666 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" containerName="extract" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.542316 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.560647 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"] Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.587994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.588076 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.588111 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89j4x\" (UniqueName: \"kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.607809 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util" (OuterVolumeSpecName: "util") pod "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51" (UID: "caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689146 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89j4x\" (UniqueName: \"kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689296 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51-util\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.689695 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.743044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89j4x\" (UniqueName: \"kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:34 crc kubenswrapper[4730]: I0221 00:16:34.857791 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:35 crc kubenswrapper[4730]: I0221 00:16:35.017356 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk" event={"ID":"caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51","Type":"ContainerDied","Data":"f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e"}
Feb 21 00:16:35 crc kubenswrapper[4730]: I0221 00:16:35.017652 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c5607bf255dddcc282cd2d2adf7e142ba97fe5cdc18d5a9f5dd65fe740a22e"
Feb 21 00:16:35 crc kubenswrapper[4730]: I0221 00:16:35.017380 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk"
Feb 21 00:16:35 crc kubenswrapper[4730]: I0221 00:16:35.206376 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"]
Feb 21 00:16:35 crc kubenswrapper[4730]: W0221 00:16:35.215301 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee2681ce_cbfb_4563_a33b_3da5e5080efb.slice/crio-6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36 WatchSource:0}: Error finding container 6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36: Status 404 returned error can't find the container with id 6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36
Feb 21 00:16:36 crc kubenswrapper[4730]: I0221 00:16:36.022692 4730 generic.go:334] "Generic (PLEG): container finished" podID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerID="2a5fefa856d4e7dc74665feb1562eaca7f2abeeec6f7a7f1b8b3eb16c3a6220e" exitCode=0
Feb 21 00:16:36 crc kubenswrapper[4730]: I0221 00:16:36.023020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" event={"ID":"ee2681ce-cbfb-4563-a33b-3da5e5080efb","Type":"ContainerDied","Data":"2a5fefa856d4e7dc74665feb1562eaca7f2abeeec6f7a7f1b8b3eb16c3a6220e"}
Feb 21 00:16:36 crc kubenswrapper[4730]: I0221 00:16:36.023044 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" event={"ID":"ee2681ce-cbfb-4563-a33b-3da5e5080efb","Type":"ContainerStarted","Data":"6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36"}
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.008898 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.009975 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.012562 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2fct2"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.013122 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.015152 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.024062 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdb8h\" (UniqueName: \"kubernetes.io/projected/1a394e26-92a9-44ea-b193-b9862b976124-kube-api-access-jdb8h\") pod \"obo-prometheus-operator-68bc856cb9-xq7rz\" (UID: \"1a394e26-92a9-44ea-b193-b9862b976124\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.026721 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.125445 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdb8h\" (UniqueName: \"kubernetes.io/projected/1a394e26-92a9-44ea-b193-b9862b976124-kube-api-access-jdb8h\") pod \"obo-prometheus-operator-68bc856cb9-xq7rz\" (UID: \"1a394e26-92a9-44ea-b193-b9862b976124\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.132458 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.136273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.141921 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.142177 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-f5b5z"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.156684 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdb8h\" (UniqueName: \"kubernetes.io/projected/1a394e26-92a9-44ea-b193-b9862b976124-kube-api-access-jdb8h\") pod \"obo-prometheus-operator-68bc856cb9-xq7rz\" (UID: \"1a394e26-92a9-44ea-b193-b9862b976124\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.160622 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.180914 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.181824 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.197086 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.226403 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.226471 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.226530 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.226559 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.243506 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cs9wn"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.244160 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.247384 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-lctlj"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.247809 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.260023 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cs9wn"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327489 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2mk\" (UniqueName: \"kubernetes.io/projected/eed8544f-c759-403e-a3ec-d4e2f2374c80-kube-api-access-lz2mk\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327554 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327591 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327634 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eed8544f-c759-403e-a3ec-d4e2f2374c80-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327735 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.327786 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.332201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.332525 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.341258 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/753fc1e6-7a24-4a3f-8379-da79db56db71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-nkrpd\" (UID: \"753fc1e6-7a24-4a3f-8379-da79db56db71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.354766 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af548d1-94ce-4f85-b585-ff94f0e6fd62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-664c67d4-rlrmt\" (UID: \"5af548d1-94ce-4f85-b585-ff94f0e6fd62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.356132 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ptzt6"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.356854 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.359152 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ptzt6"]
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.360639 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2xxcv"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.374432 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.428167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7723fd91-2ebf-45d9-b8a5-07ed9281185a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.428238 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2mk\" (UniqueName: \"kubernetes.io/projected/eed8544f-c759-403e-a3ec-d4e2f2374c80-kube-api-access-lz2mk\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.428270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eed8544f-c759-403e-a3ec-d4e2f2374c80-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.428302 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpm85\" (UniqueName: \"kubernetes.io/projected/7723fd91-2ebf-45d9-b8a5-07ed9281185a-kube-api-access-tpm85\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.432713 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eed8544f-c759-403e-a3ec-d4e2f2374c80-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.442270 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2mk\" (UniqueName: \"kubernetes.io/projected/eed8544f-c759-403e-a3ec-d4e2f2374c80-kube-api-access-lz2mk\") pod \"observability-operator-59bdc8b94-cs9wn\" (UID: \"eed8544f-c759-403e-a3ec-d4e2f2374c80\") " pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.478631 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.498895 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.529961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpm85\" (UniqueName: \"kubernetes.io/projected/7723fd91-2ebf-45d9-b8a5-07ed9281185a-kube-api-access-tpm85\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.530017 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7723fd91-2ebf-45d9-b8a5-07ed9281185a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.530684 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7723fd91-2ebf-45d9-b8a5-07ed9281185a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.547500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpm85\" (UniqueName: \"kubernetes.io/projected/7723fd91-2ebf-45d9-b8a5-07ed9281185a-kube-api-access-tpm85\") pod \"perses-operator-5bf474d74f-ptzt6\" (UID: \"7723fd91-2ebf-45d9-b8a5-07ed9281185a\") " pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.573426 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.701767 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2xxcv"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.710097 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6"
Feb 21 00:16:38 crc kubenswrapper[4730]: I0221 00:16:38.984055 4730 scope.go:117] "RemoveContainer" containerID="141945e352de594bf3f76de73cfdbe4158bd65f375e20d134853f936ff8947f4"
Feb 21 00:16:40 crc kubenswrapper[4730]: I0221 00:16:40.298062 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cs9wn"]
Feb 21 00:16:40 crc kubenswrapper[4730]: I0221 00:16:40.412850 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd"]
Feb 21 00:16:40 crc kubenswrapper[4730]: I0221 00:16:40.469408 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz"]
Feb 21 00:16:40 crc kubenswrapper[4730]: I0221 00:16:40.473807 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt"]
Feb 21 00:16:40 crc kubenswrapper[4730]: I0221 00:16:40.531362 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ptzt6"]
Feb 21 00:16:40 crc kubenswrapper[4730]: W0221 00:16:40.542350 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7723fd91_2ebf_45d9_b8a5_07ed9281185a.slice/crio-a34957dc8532d801edffc967a38de699ceaa0df7fd34cbfd46e4b65c09f27883 WatchSource:0}: Error finding container a34957dc8532d801edffc967a38de699ceaa0df7fd34cbfd46e4b65c09f27883: Status 404 returned error can't find the container with id a34957dc8532d801edffc967a38de699ceaa0df7fd34cbfd46e4b65c09f27883
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.049652 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt" event={"ID":"5af548d1-94ce-4f85-b585-ff94f0e6fd62","Type":"ContainerStarted","Data":"f8aa82e067b6666555eab165ffdaf1ba42e5ce73fbc82ec9174d8770e6c3ed3f"}
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.051444 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn" event={"ID":"eed8544f-c759-403e-a3ec-d4e2f2374c80","Type":"ContainerStarted","Data":"0231f7ca2e17e602d0f6594ef9ce45d0ce228af740f1a93d43c02353222139f6"}
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.052746 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd" event={"ID":"753fc1e6-7a24-4a3f-8379-da79db56db71","Type":"ContainerStarted","Data":"4b8d6bd1bba4f31d71726a0473ba0e55535535d03ccfc84bf38f1704428a4523"}
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.053878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6" event={"ID":"7723fd91-2ebf-45d9-b8a5-07ed9281185a","Type":"ContainerStarted","Data":"a34957dc8532d801edffc967a38de699ceaa0df7fd34cbfd46e4b65c09f27883"}
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.054822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz" event={"ID":"1a394e26-92a9-44ea-b193-b9862b976124","Type":"ContainerStarted","Data":"8ba50749c3307c019e95707fff0b4fdc89f4667910e84e835f4699a24e1889e9"}
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.056907 4730 generic.go:334] "Generic (PLEG): container finished" podID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerID="cc9e41dcf1f25243960d36f1b5df5855e627ff39e4143a026db333ae7979b375" exitCode=0
Feb 21 00:16:41 crc kubenswrapper[4730]: I0221 00:16:41.056937 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" event={"ID":"ee2681ce-cbfb-4563-a33b-3da5e5080efb","Type":"ContainerDied","Data":"cc9e41dcf1f25243960d36f1b5df5855e627ff39e4143a026db333ae7979b375"}
Feb 21 00:16:42 crc kubenswrapper[4730]: I0221 00:16:42.114699 4730 generic.go:334] "Generic (PLEG): container finished" podID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerID="6b3ceb7fb7641845fce7ed3508c41e2df88aa8f766b616c2ad15030f7362fc1a" exitCode=0
Feb 21 00:16:42 crc kubenswrapper[4730]: I0221 00:16:42.115063 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" event={"ID":"ee2681ce-cbfb-4563-a33b-3da5e5080efb","Type":"ContainerDied","Data":"6b3ceb7fb7641845fce7ed3508c41e2df88aa8f766b616c2ad15030f7362fc1a"}
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.370148 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.409116 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle\") pod \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") "
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.409150 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89j4x\" (UniqueName: \"kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x\") pod \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") "
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.409236 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util\") pod \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\" (UID: \"ee2681ce-cbfb-4563-a33b-3da5e5080efb\") "
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.411416 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle" (OuterVolumeSpecName: "bundle") pod "ee2681ce-cbfb-4563-a33b-3da5e5080efb" (UID: "ee2681ce-cbfb-4563-a33b-3da5e5080efb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.419508 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util" (OuterVolumeSpecName: "util") pod "ee2681ce-cbfb-4563-a33b-3da5e5080efb" (UID: "ee2681ce-cbfb-4563-a33b-3da5e5080efb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.424210 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x" (OuterVolumeSpecName: "kube-api-access-89j4x") pod "ee2681ce-cbfb-4563-a33b-3da5e5080efb" (UID: "ee2681ce-cbfb-4563-a33b-3da5e5080efb"). InnerVolumeSpecName "kube-api-access-89j4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.510620 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-util\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.510648 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee2681ce-cbfb-4563-a33b-3da5e5080efb-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:43 crc kubenswrapper[4730]: I0221 00:16:43.510658 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89j4x\" (UniqueName: \"kubernetes.io/projected/ee2681ce-cbfb-4563-a33b-3da5e5080efb-kube-api-access-89j4x\") on node \"crc\" DevicePath \"\""
Feb 21 00:16:44 crc kubenswrapper[4730]: I0221 00:16:44.138547 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk" event={"ID":"ee2681ce-cbfb-4563-a33b-3da5e5080efb","Type":"ContainerDied","Data":"6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36"}
Feb 21 00:16:44 crc kubenswrapper[4730]: I0221 00:16:44.138586 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca8a185c4c45974736272359088e74725abf170e4f41f25d8fbac038683bc36"
Feb 21 00:16:44 crc kubenswrapper[4730]: I0221 00:16:44.138642 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.466762 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6b585df86-c6vld"]
Feb 21 00:16:45 crc kubenswrapper[4730]: E0221 00:16:45.467257 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="pull"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.467269 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="pull"
Feb 21 00:16:45 crc kubenswrapper[4730]: E0221 00:16:45.467281 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="util"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.467287 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="util"
Feb 21 00:16:45 crc kubenswrapper[4730]: E0221 00:16:45.467295 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="extract"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.467301 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="extract"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.467383 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2681ce-cbfb-4563-a33b-3da5e5080efb" containerName="extract"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.467742 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.470181 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.471139 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.471351 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.478833 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-lnj7x"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.533734 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6b585df86-c6vld"]
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.541394 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvcz\" (UniqueName: \"kubernetes.io/projected/9bdd20e2-2630-40b0-8214-7c293ba12912-kube-api-access-lvvcz\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.541434 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-webhook-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.541476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-apiservice-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.642860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-apiservice-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.642938 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvcz\" (UniqueName: \"kubernetes.io/projected/9bdd20e2-2630-40b0-8214-7c293ba12912-kube-api-access-lvvcz\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.642977 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-webhook-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.648297 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-webhook-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld"
Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.648297 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdd20e2-2630-40b0-8214-7c293ba12912-apiservice-cert\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld" Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.657143 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvcz\" (UniqueName: \"kubernetes.io/projected/9bdd20e2-2630-40b0-8214-7c293ba12912-kube-api-access-lvvcz\") pod \"elastic-operator-6b585df86-c6vld\" (UID: \"9bdd20e2-2630-40b0-8214-7c293ba12912\") " pod="service-telemetry/elastic-operator-6b585df86-c6vld" Feb 21 00:16:45 crc kubenswrapper[4730]: I0221 00:16:45.781308 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6b585df86-c6vld" Feb 21 00:16:52 crc kubenswrapper[4730]: I0221 00:16:52.970532 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6b585df86-c6vld"] Feb 21 00:16:52 crc kubenswrapper[4730]: W0221 00:16:52.985326 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdd20e2_2630_40b0_8214_7c293ba12912.slice/crio-f38a141b380f18a2734e92c93a0854b0668a606cdda961aaf7ceb10ab866ff90 WatchSource:0}: Error finding container f38a141b380f18a2734e92c93a0854b0668a606cdda961aaf7ceb10ab866ff90: Status 404 returned error can't find the container with id f38a141b380f18a2734e92c93a0854b0668a606cdda961aaf7ceb10ab866ff90 Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.191535 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd" event={"ID":"753fc1e6-7a24-4a3f-8379-da79db56db71","Type":"ContainerStarted","Data":"b60cc3cce7a8a0608155b4b85ac5dc4b16399e8f3bfb61c976eab771269cbd85"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 
00:16:53.193543 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6" event={"ID":"7723fd91-2ebf-45d9-b8a5-07ed9281185a","Type":"ContainerStarted","Data":"2682ecb59bcfa260dbbefc9d98d12191320e2d8620cabd9570ede6d7f02a6975"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.193727 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.195150 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt" event={"ID":"5af548d1-94ce-4f85-b585-ff94f0e6fd62","Type":"ContainerStarted","Data":"ac404bc704b264a7e476d58632f64100fd0cbbaa38d0e05b32e687f74b9fce7c"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.196441 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz" event={"ID":"1a394e26-92a9-44ea-b193-b9862b976124","Type":"ContainerStarted","Data":"30f52f168b853f95d751c66b8b18e2f1f354019215bc1726b667f2026ac3bca2"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.198066 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn" event={"ID":"eed8544f-c759-403e-a3ec-d4e2f2374c80","Type":"ContainerStarted","Data":"8a93121d0fe2167e6d69af444dcdca84a365f07fc0b22191e5874494a8f7aa3b"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.198285 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.199022 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6b585df86-c6vld" 
event={"ID":"9bdd20e2-2630-40b0-8214-7c293ba12912","Type":"ContainerStarted","Data":"f38a141b380f18a2734e92c93a0854b0668a606cdda961aaf7ceb10ab866ff90"} Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.209282 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-nkrpd" podStartSLOduration=3.048594571 podStartE2EDuration="15.209267444s" podCreationTimestamp="2026-02-21 00:16:38 +0000 UTC" firstStartedPulling="2026-02-21 00:16:40.424200405 +0000 UTC m=+602.435767340" lastFinishedPulling="2026-02-21 00:16:52.584873278 +0000 UTC m=+614.596440213" observedRunningTime="2026-02-21 00:16:53.20669619 +0000 UTC m=+615.218263125" watchObservedRunningTime="2026-02-21 00:16:53.209267444 +0000 UTC m=+615.220834379" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.215441 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.232470 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-664c67d4-rlrmt" podStartSLOduration=3.147588663 podStartE2EDuration="15.232300512s" podCreationTimestamp="2026-02-21 00:16:38 +0000 UTC" firstStartedPulling="2026-02-21 00:16:40.515391935 +0000 UTC m=+602.526958870" lastFinishedPulling="2026-02-21 00:16:52.600103784 +0000 UTC m=+614.611670719" observedRunningTime="2026-02-21 00:16:53.227052902 +0000 UTC m=+615.238619847" watchObservedRunningTime="2026-02-21 00:16:53.232300512 +0000 UTC m=+615.243867447" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.249148 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6" podStartSLOduration=3.194427059 podStartE2EDuration="15.249133277s" podCreationTimestamp="2026-02-21 00:16:38 +0000 UTC" 
firstStartedPulling="2026-02-21 00:16:40.544155005 +0000 UTC m=+602.555721940" lastFinishedPulling="2026-02-21 00:16:52.598861223 +0000 UTC m=+614.610428158" observedRunningTime="2026-02-21 00:16:53.245370264 +0000 UTC m=+615.256937199" watchObservedRunningTime="2026-02-21 00:16:53.249133277 +0000 UTC m=+615.260700212" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.305883 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xq7rz" podStartSLOduration=4.200581152 podStartE2EDuration="16.305865938s" podCreationTimestamp="2026-02-21 00:16:37 +0000 UTC" firstStartedPulling="2026-02-21 00:16:40.515434266 +0000 UTC m=+602.527001201" lastFinishedPulling="2026-02-21 00:16:52.620719052 +0000 UTC m=+614.632285987" observedRunningTime="2026-02-21 00:16:53.276161184 +0000 UTC m=+615.287728119" watchObservedRunningTime="2026-02-21 00:16:53.305865938 +0000 UTC m=+615.317432873" Feb 21 00:16:53 crc kubenswrapper[4730]: I0221 00:16:53.306045 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-cs9wn" podStartSLOduration=3.009714212 podStartE2EDuration="15.306039742s" podCreationTimestamp="2026-02-21 00:16:38 +0000 UTC" firstStartedPulling="2026-02-21 00:16:40.351408869 +0000 UTC m=+602.362975794" lastFinishedPulling="2026-02-21 00:16:52.647734389 +0000 UTC m=+614.659301324" observedRunningTime="2026-02-21 00:16:53.301221562 +0000 UTC m=+615.312788507" watchObservedRunningTime="2026-02-21 00:16:53.306039742 +0000 UTC m=+615.317606687" Feb 21 00:16:57 crc kubenswrapper[4730]: I0221 00:16:57.219121 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6b585df86-c6vld" event={"ID":"9bdd20e2-2630-40b0-8214-7c293ba12912","Type":"ContainerStarted","Data":"d125c755b2c31f09a9fc137f84c8911386f2b075e42286666b835fb536cc3b1c"} Feb 21 00:16:57 crc kubenswrapper[4730]: I0221 00:16:57.240331 
4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6b585df86-c6vld" podStartSLOduration=9.019390393 podStartE2EDuration="12.240313173s" podCreationTimestamp="2026-02-21 00:16:45 +0000 UTC" firstStartedPulling="2026-02-21 00:16:52.99225281 +0000 UTC m=+615.003819735" lastFinishedPulling="2026-02-21 00:16:56.21317558 +0000 UTC m=+618.224742515" observedRunningTime="2026-02-21 00:16:57.237756899 +0000 UTC m=+619.249323834" watchObservedRunningTime="2026-02-21 00:16:57.240313173 +0000 UTC m=+619.251880108" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.614937 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55"] Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.616126 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.619411 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.619703 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-xdpfh" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.621208 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.628547 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55"] Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.712166 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ptzt6" Feb 21 00:16:58 crc 
kubenswrapper[4730]: I0221 00:16:58.717415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/049465f8-b9a5-4ca6-9d80-532ce288c904-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.717543 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjcz\" (UniqueName: \"kubernetes.io/projected/049465f8-b9a5-4ca6-9d80-532ce288c904-kube-api-access-5vjcz\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.818328 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjcz\" (UniqueName: \"kubernetes.io/projected/049465f8-b9a5-4ca6-9d80-532ce288c904-kube-api-access-5vjcz\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.818401 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/049465f8-b9a5-4ca6-9d80-532ce288c904-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.818805 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/049465f8-b9a5-4ca6-9d80-532ce288c904-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.840227 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjcz\" (UniqueName: \"kubernetes.io/projected/049465f8-b9a5-4ca6-9d80-532ce288c904-kube-api-access-5vjcz\") pod \"cert-manager-operator-controller-manager-5586865c96-sdf55\" (UID: \"049465f8-b9a5-4ca6-9d80-532ce288c904\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:58 crc kubenswrapper[4730]: I0221 00:16:58.970960 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" Feb 21 00:16:59 crc kubenswrapper[4730]: I0221 00:16:59.324114 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55"] Feb 21 00:17:00 crc kubenswrapper[4730]: I0221 00:17:00.239007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" event={"ID":"049465f8-b9a5-4ca6-9d80-532ce288c904","Type":"ContainerStarted","Data":"e7c0c52b1598a48e8ab1fedcb7b5385f1bb5e7cddf5cc462a6dc159296727d11"} Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.256454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" event={"ID":"049465f8-b9a5-4ca6-9d80-532ce288c904","Type":"ContainerStarted","Data":"cd6677ddcc7b90fe8b8bac9f1b65b41c88244ebafae26b483b2ecc96b2fccd99"} Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.285727 4730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-sdf55" podStartSLOduration=2.422285702 podStartE2EDuration="5.285697901s" podCreationTimestamp="2026-02-21 00:16:58 +0000 UTC" firstStartedPulling="2026-02-21 00:16:59.335268072 +0000 UTC m=+621.346835007" lastFinishedPulling="2026-02-21 00:17:02.198680271 +0000 UTC m=+624.210247206" observedRunningTime="2026-02-21 00:17:03.284982603 +0000 UTC m=+625.296549538" watchObservedRunningTime="2026-02-21 00:17:03.285697901 +0000 UTC m=+625.297264836" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.452924 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.453882 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.456731 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.458323 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.458326 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.458607 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.458847 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.458918 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.460705 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.460733 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.460742 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-kxfdw" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.472038 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585095 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585138 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585192 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-xpack-file-realm\") pod 
\"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585213 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585255 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585413 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9cde0e02-5b39-4293-97db-651eb9f0e2aa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585439 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585482 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585498 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585526 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585701 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585754 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-logs\") pod 
\"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585799 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585834 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.585856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686820 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686866 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9cde0e02-5b39-4293-97db-651eb9f0e2aa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686915 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686964 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.686993 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687011 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687063 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687105 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687139 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687159 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 
00:17:03.687536 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687593 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687614 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687760 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687880 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.687968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9cde0e02-5b39-4293-97db-651eb9f0e2aa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.688212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.688625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.693213 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.696363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.696416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.697365 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.706414 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/9cde0e02-5b39-4293-97db-651eb9f0e2aa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.706646 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.710587 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/9cde0e02-5b39-4293-97db-651eb9f0e2aa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"9cde0e02-5b39-4293-97db-651eb9f0e2aa\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:03 crc kubenswrapper[4730]: I0221 00:17:03.768821 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:04 crc kubenswrapper[4730]: I0221 00:17:04.198111 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 21 00:17:04 crc kubenswrapper[4730]: W0221 00:17:04.204398 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cde0e02_5b39_4293_97db_651eb9f0e2aa.slice/crio-4eb99b4c0b0af80970988a7d41c768a4a8e898bc983fd7ba4f7c87408a0cf80e WatchSource:0}: Error finding container 4eb99b4c0b0af80970988a7d41c768a4a8e898bc983fd7ba4f7c87408a0cf80e: Status 404 returned error can't find the container with id 4eb99b4c0b0af80970988a7d41c768a4a8e898bc983fd7ba4f7c87408a0cf80e Feb 21 00:17:04 crc kubenswrapper[4730]: I0221 00:17:04.262895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9cde0e02-5b39-4293-97db-651eb9f0e2aa","Type":"ContainerStarted","Data":"4eb99b4c0b0af80970988a7d41c768a4a8e898bc983fd7ba4f7c87408a0cf80e"} Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.037739 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8dfl9"] Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.038827 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.044754 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.049441 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.051790 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qg89k" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.056394 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8dfl9"] Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.200167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dctlt\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-kube-api-access-dctlt\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: \"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.200212 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: \"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.300700 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dctlt\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-kube-api-access-dctlt\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: 
\"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.300748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: \"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.323585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dctlt\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-kube-api-access-dctlt\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: \"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.327442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b87a4216-583b-4228-a198-5c3a71ec9184-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8dfl9\" (UID: \"b87a4216-583b-4228-a198-5c3a71ec9184\") " pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.354048 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:11 crc kubenswrapper[4730]: I0221 00:17:11.599879 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8dfl9"] Feb 21 00:17:12 crc kubenswrapper[4730]: I0221 00:17:12.314497 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" event={"ID":"b87a4216-583b-4228-a198-5c3a71ec9184","Type":"ContainerStarted","Data":"2b174245ef4ff433755ff528548bbf1206fc2120fae7b340dc712006444caf31"} Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.293005 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-jtkvs"] Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.295332 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.297911 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vbftk" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.308437 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-jtkvs"] Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.465595 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrv6\" (UniqueName: \"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-kube-api-access-wsrv6\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.465653 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.566876 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrv6\" (UniqueName: \"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-kube-api-access-wsrv6\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.566932 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.586493 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.586612 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrv6\" (UniqueName: \"kubernetes.io/projected/611d7c89-5596-472d-8778-3724f0b2f5ea-kube-api-access-wsrv6\") pod \"cert-manager-cainjector-5545bd876-jtkvs\" (UID: \"611d7c89-5596-472d-8778-3724f0b2f5ea\") " pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:16 crc kubenswrapper[4730]: I0221 00:17:16.612784 4730 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.807126 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-vb7r6"] Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.808580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.813305 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-vb7r6"] Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.814384 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fx2jd" Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.988173 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzrz\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-kube-api-access-cgzrz\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:24 crc kubenswrapper[4730]: I0221 00:17:24.988544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-bound-sa-token\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.089673 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-bound-sa-token\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " 
pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.089795 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgzrz\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-kube-api-access-cgzrz\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.114891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgzrz\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-kube-api-access-cgzrz\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.115448 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba1903b-ca71-407e-a9f2-4cb9932a636d-bound-sa-token\") pod \"cert-manager-545d4d4674-vb7r6\" (UID: \"dba1903b-ca71-407e-a9f2-4cb9932a636d\") " pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.132486 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-vb7r6" Feb 21 00:17:25 crc kubenswrapper[4730]: I0221 00:17:25.551413 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-jtkvs"] Feb 21 00:17:25 crc kubenswrapper[4730]: W0221 00:17:25.997226 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod611d7c89_5596_472d_8778_3724f0b2f5ea.slice/crio-0b0862ae94b857bc307489850d0cb5445f832ce8ef6525f8f910405f89213a88 WatchSource:0}: Error finding container 0b0862ae94b857bc307489850d0cb5445f832ce8ef6525f8f910405f89213a88: Status 404 returned error can't find the container with id 0b0862ae94b857bc307489850d0cb5445f832ce8ef6525f8f910405f89213a88 Feb 21 00:17:26 crc kubenswrapper[4730]: E0221 00:17:26.186804 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Feb 21 00:17:26 crc kubenswrapper[4730]: E0221 00:17:26.187302 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c 
/mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:
true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(9cde0e02-5b39-4293-97db-651eb9f0e2aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 00:17:26 crc kubenswrapper[4730]: E0221 00:17:26.189479 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" Feb 21 00:17:26 crc 
kubenswrapper[4730]: I0221 00:17:26.284638 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-vb7r6"] Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.398764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" event={"ID":"b87a4216-583b-4228-a198-5c3a71ec9184","Type":"ContainerStarted","Data":"487b1e4b557b3b6d8701894f380bc8d96e7abe0d7860ca675b060e00710f521a"} Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.399614 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.400914 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" event={"ID":"611d7c89-5596-472d-8778-3724f0b2f5ea","Type":"ContainerStarted","Data":"4944afb4fc9a510ed48ebce70d5bed855c07c7fc4b76faac5d9e97494eae8e0a"} Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.400956 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" event={"ID":"611d7c89-5596-472d-8778-3724f0b2f5ea","Type":"ContainerStarted","Data":"0b0862ae94b857bc307489850d0cb5445f832ce8ef6525f8f910405f89213a88"} Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.402773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-vb7r6" event={"ID":"dba1903b-ca71-407e-a9f2-4cb9932a636d","Type":"ContainerStarted","Data":"2f4559cc5bc7347c766d419d07c2880b08576151e92d056e5e72707472c9838d"} Feb 21 00:17:26 crc kubenswrapper[4730]: E0221 00:17:26.403738 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" 
pod="service-telemetry/elasticsearch-es-default-0" podUID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.433031 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-jtkvs" podStartSLOduration=10.433014067 podStartE2EDuration="10.433014067s" podCreationTimestamp="2026-02-21 00:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:17:26.431934011 +0000 UTC m=+648.443500946" watchObservedRunningTime="2026-02-21 00:17:26.433014067 +0000 UTC m=+648.444581002" Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.434846 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" podStartSLOduration=0.970968305 podStartE2EDuration="15.434840203s" podCreationTimestamp="2026-02-21 00:17:11 +0000 UTC" firstStartedPulling="2026-02-21 00:17:11.621214265 +0000 UTC m=+633.632781190" lastFinishedPulling="2026-02-21 00:17:26.085086113 +0000 UTC m=+648.096653088" observedRunningTime="2026-02-21 00:17:26.420215012 +0000 UTC m=+648.431781947" watchObservedRunningTime="2026-02-21 00:17:26.434840203 +0000 UTC m=+648.446407138" Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.604871 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 21 00:17:26 crc kubenswrapper[4730]: I0221 00:17:26.645829 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 21 00:17:27 crc kubenswrapper[4730]: I0221 00:17:27.409031 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-vb7r6" event={"ID":"dba1903b-ca71-407e-a9f2-4cb9932a636d","Type":"ContainerStarted","Data":"ed4365259b2e7bc96ae90db84dc5e65e595cf14949fdf41753cf42ae305c7bc7"} Feb 21 00:17:27 crc 
kubenswrapper[4730]: E0221 00:17:27.410444 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" Feb 21 00:17:27 crc kubenswrapper[4730]: I0221 00:17:27.457151 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-vb7r6" podStartSLOduration=3.457099325 podStartE2EDuration="3.457099325s" podCreationTimestamp="2026-02-21 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 00:17:27.45487555 +0000 UTC m=+649.466442495" watchObservedRunningTime="2026-02-21 00:17:27.457099325 +0000 UTC m=+649.468666270" Feb 21 00:17:28 crc kubenswrapper[4730]: E0221 00:17:28.420928 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" Feb 21 00:17:31 crc kubenswrapper[4730]: I0221 00:17:31.357241 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8dfl9" Feb 21 00:17:44 crc kubenswrapper[4730]: I0221 00:17:44.524531 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9cde0e02-5b39-4293-97db-651eb9f0e2aa","Type":"ContainerStarted","Data":"f6275f4e235fb99e3df855c59477bccc91226fc5d6982bcfec803b62e6ea21b0"} Feb 21 00:17:46 crc kubenswrapper[4730]: I0221 00:17:46.542817 4730 generic.go:334] "Generic (PLEG): container 
finished" podID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" containerID="f6275f4e235fb99e3df855c59477bccc91226fc5d6982bcfec803b62e6ea21b0" exitCode=0 Feb 21 00:17:46 crc kubenswrapper[4730]: I0221 00:17:46.542868 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9cde0e02-5b39-4293-97db-651eb9f0e2aa","Type":"ContainerDied","Data":"f6275f4e235fb99e3df855c59477bccc91226fc5d6982bcfec803b62e6ea21b0"} Feb 21 00:17:47 crc kubenswrapper[4730]: I0221 00:17:47.554059 4730 generic.go:334] "Generic (PLEG): container finished" podID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" containerID="3e785adc56a61e4db6777f6f2a8fba4ba8766af1ae03eeffea2a1117aaea1915" exitCode=0 Feb 21 00:17:47 crc kubenswrapper[4730]: I0221 00:17:47.554280 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9cde0e02-5b39-4293-97db-651eb9f0e2aa","Type":"ContainerDied","Data":"3e785adc56a61e4db6777f6f2a8fba4ba8766af1ae03eeffea2a1117aaea1915"} Feb 21 00:17:48 crc kubenswrapper[4730]: I0221 00:17:48.560652 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"9cde0e02-5b39-4293-97db-651eb9f0e2aa","Type":"ContainerStarted","Data":"ff02ab8d9874ae61f731101b131fa6f204d9f2400416af7c60ca1353ac7d21fb"} Feb 21 00:17:48 crc kubenswrapper[4730]: I0221 00:17:48.561672 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:17:48 crc kubenswrapper[4730]: I0221 00:17:48.612214 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.611961556 podStartE2EDuration="45.612199736s" podCreationTimestamp="2026-02-21 00:17:03 +0000 UTC" firstStartedPulling="2026-02-21 00:17:04.206521111 +0000 UTC m=+626.218088046" lastFinishedPulling="2026-02-21 00:17:44.206759251 +0000 UTC 
m=+666.218326226" observedRunningTime="2026-02-21 00:17:48.611825947 +0000 UTC m=+670.623392882" watchObservedRunningTime="2026-02-21 00:17:48.612199736 +0000 UTC m=+670.623766671" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.199906 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.202866 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.211603 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.211620 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.211727 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.211839 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-sm5t8" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.211888 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.231224 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: 
\"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276827 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276864 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276887 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276908 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276929 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276965 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.276988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 
00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.277016 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.277030 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.277131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.277217 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.378676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.378981 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379057 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379293 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379496 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379568 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379721 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379834 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.380043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.380176 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.380250 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.379589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.380559 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.381180 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.382470 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.382654 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.384850 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.386030 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.398553 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.399643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.523200 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:17:56 crc kubenswrapper[4730]: I0221 00:17:56.942773 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 21 00:17:56 crc kubenswrapper[4730]: W0221 00:17:56.956733 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda39cc57b_1bd0_4926_8740_ff2d4bc34057.slice/crio-e71d8644296e87b880874b0ecf2b9e0c9d8e2f09120ad3080c6ec3f5db048820 WatchSource:0}: Error finding container e71d8644296e87b880874b0ecf2b9e0c9d8e2f09120ad3080c6ec3f5db048820: Status 404 returned error can't find the container with id e71d8644296e87b880874b0ecf2b9e0c9d8e2f09120ad3080c6ec3f5db048820 Feb 21 00:17:57 crc kubenswrapper[4730]: I0221 00:17:57.609449 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a39cc57b-1bd0-4926-8740-ff2d4bc34057","Type":"ContainerStarted","Data":"e71d8644296e87b880874b0ecf2b9e0c9d8e2f09120ad3080c6ec3f5db048820"} Feb 21 00:17:58 crc kubenswrapper[4730]: I0221 00:17:58.842906 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="9cde0e02-5b39-4293-97db-651eb9f0e2aa" containerName="elasticsearch" probeResult="failure" output=< Feb 21 00:17:58 crc kubenswrapper[4730]: {"timestamp": "2026-02-21T00:17:58+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 21 00:17:58 crc kubenswrapper[4730]: > Feb 21 00:18:02 crc kubenswrapper[4730]: I0221 00:18:02.642837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a39cc57b-1bd0-4926-8740-ff2d4bc34057","Type":"ContainerStarted","Data":"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"} Feb 21 00:18:02 crc kubenswrapper[4730]: E0221 
00:18:02.702873 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4186130146597151669, SKID=, AKID=1A:B3:AC:79:DB:0C:0E:96:F7:CB:3B:DC:15:5F:B1:75:BF:53:EC:A7 failed: x509: certificate signed by unknown authority" Feb 21 00:18:03 crc kubenswrapper[4730]: I0221 00:18:03.731451 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 21 00:18:04 crc kubenswrapper[4730]: I0221 00:18:04.240370 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 21 00:18:04 crc kubenswrapper[4730]: I0221 00:18:04.653254 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" containerName="git-clone" containerID="cri-o://8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028" gracePeriod=30 Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.024199 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_a39cc57b-1bd0-4926-8740-ff2d4bc34057/git-clone/0.log" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.024291 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.091708 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.091788 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.091824 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.091866 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.091992 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092030 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092075 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092136 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092158 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc 
kubenswrapper[4730]: I0221 00:18:05.092202 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092223 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092242 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092263 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092417 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull\") pod \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\" (UID: \"a39cc57b-1bd0-4926-8740-ff2d4bc34057\") " Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092530 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092547 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092560 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092844 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092854 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092902 4730 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092928 4730 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092953 4730 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092964 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092973 4730 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.092999 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.093371 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.097851 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-pull") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "builder-dockercfg-sm5t8-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.097970 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s" (OuterVolumeSpecName: "kube-api-access-td67s") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "kube-api-access-td67s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.099079 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-push") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "builder-dockercfg-sm5t8-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.103222 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "a39cc57b-1bd0-4926-8740-ff2d4bc34057" (UID: "a39cc57b-1bd0-4926-8740-ff2d4bc34057"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.193982 4730 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194016 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-push\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194027 4730 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a39cc57b-1bd0-4926-8740-ff2d4bc34057-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194035 4730 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194044 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/a39cc57b-1bd0-4926-8740-ff2d4bc34057-kube-api-access-td67s\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194052 4730 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a39cc57b-1bd0-4926-8740-ff2d4bc34057-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.194061 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/a39cc57b-1bd0-4926-8740-ff2d4bc34057-builder-dockercfg-sm5t8-pull\") on node \"crc\" DevicePath \"\""
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662438 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_a39cc57b-1bd0-4926-8740-ff2d4bc34057/git-clone/0.log"
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662492 4730 generic.go:334] "Generic (PLEG): container finished" podID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" containerID="8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028" exitCode=1
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662519 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a39cc57b-1bd0-4926-8740-ff2d4bc34057","Type":"ContainerDied","Data":"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"}
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662546 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a39cc57b-1bd0-4926-8740-ff2d4bc34057","Type":"ContainerDied","Data":"e71d8644296e87b880874b0ecf2b9e0c9d8e2f09120ad3080c6ec3f5db048820"}
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662562 4730 scope.go:117] "RemoveContainer" containerID="8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.662622 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.680766 4730 scope.go:117] "RemoveContainer" containerID="8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"
Feb 21 00:18:05 crc kubenswrapper[4730]: E0221 00:18:05.681272 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028\": container with ID starting with 8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028 not found: ID does not exist" containerID="8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.681339 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028"} err="failed to get container status \"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028\": rpc error: code = NotFound desc = could not find container \"8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028\": container with ID starting with 8fd8650373bc81c13ae39d3f0142a543cce0f446faf6a8f31ea804b31c18f028 not found: ID does not exist"
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.695218 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 21 00:18:05 crc kubenswrapper[4730]: I0221 00:18:05.700133 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 21 00:18:06 crc kubenswrapper[4730]: I0221 00:18:06.700632 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" path="/var/lib/kubelet/pods/a39cc57b-1bd0-4926-8740-ff2d4bc34057/volumes"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.161365 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 21 00:18:15 crc kubenswrapper[4730]: E0221 00:18:15.162333 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" containerName="git-clone"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.162355 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" containerName="git-clone"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.162542 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39cc57b-1bd0-4926-8740-ff2d4bc34057" containerName="git-clone"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.163985 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.166850 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-sm5t8"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.167764 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-global-ca"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.169011 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.169048 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-ca"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.170075 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-sys-config"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.197543 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228543 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228579 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228648 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5mj\" (UniqueName: \"kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228677 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228829 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228885 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.228934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.229055 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.229086 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.229227 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.229255 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.229333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330788 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330873 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330902 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330972 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.330993 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331026 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331058 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331080 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331096 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj5mj\" (UniqueName: \"kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331129 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331157 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331670 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331821 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.331907 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.332336 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.332514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.338175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.338615 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.341358 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.353328 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj5mj\" (UniqueName: \"kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj\") pod \"service-telemetry-framework-index-2-build\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.496283 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:15 crc kubenswrapper[4730]: I0221 00:18:15.950000 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 21 00:18:16 crc kubenswrapper[4730]: I0221 00:18:16.737092 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"713a26da-b39a-4cd8-880a-97a52056cc2e","Type":"ContainerStarted","Data":"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c"}
Feb 21 00:18:16 crc kubenswrapper[4730]: I0221 00:18:16.737510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"713a26da-b39a-4cd8-880a-97a52056cc2e","Type":"ContainerStarted","Data":"9baa6b717d2a0dc2e881c4d7633531df149093c08ce2daf7ee5f1ebe4ae1c5a0"}
Feb 21 00:18:16 crc kubenswrapper[4730]: E0221 00:18:16.814028 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4186130146597151669, SKID=, AKID=1A:B3:AC:79:DB:0C:0E:96:F7:CB:3B:DC:15:5F:B1:75:BF:53:EC:A7 failed: x509: certificate signed by unknown authority"
Feb 21 00:18:17 crc kubenswrapper[4730]: I0221 00:18:17.843538 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 21 00:18:18 crc kubenswrapper[4730]: I0221 00:18:18.758873 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="713a26da-b39a-4cd8-880a-97a52056cc2e" containerName="git-clone" containerID="cri-o://7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c" gracePeriod=30
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.257330 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_713a26da-b39a-4cd8-880a-97a52056cc2e/git-clone/0.log"
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.257786 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.400754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.400875 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.400923 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.400987 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401029 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401069 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401064 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401108 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401149 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401187 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401217 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401299 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401359 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj5mj\" (UniqueName: \"kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401392 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull\") pod \"713a26da-b39a-4cd8-880a-97a52056cc2e\" (UID: \"713a26da-b39a-4cd8-880a-97a52056cc2e\") "
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401289 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.401664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402007 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402155 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402461 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "buildworkdir".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402518 4730 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402535 4730 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402545 4730 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402552 4730 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402561 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402568 4730 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/713a26da-b39a-4cd8-880a-97a52056cc2e-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.402576 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 
crc kubenswrapper[4730]: I0221 00:18:19.402649 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.410526 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-push") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "builder-dockercfg-sm5t8-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.410551 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.410578 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-pull") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "builder-dockercfg-sm5t8-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.410589 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj" (OuterVolumeSpecName: "kube-api-access-kj5mj") pod "713a26da-b39a-4cd8-880a-97a52056cc2e" (UID: "713a26da-b39a-4cd8-880a-97a52056cc2e"). InnerVolumeSpecName "kube-api-access-kj5mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503327 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj5mj\" (UniqueName: \"kubernetes.io/projected/713a26da-b39a-4cd8-880a-97a52056cc2e-kube-api-access-kj5mj\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503358 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-pull\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503368 4730 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503378 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/713a26da-b39a-4cd8-880a-97a52056cc2e-builder-dockercfg-sm5t8-push\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503388 4730 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/713a26da-b39a-4cd8-880a-97a52056cc2e-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.503396 4730 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/713a26da-b39a-4cd8-880a-97a52056cc2e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.768396 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_713a26da-b39a-4cd8-880a-97a52056cc2e/git-clone/0.log" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.768754 4730 generic.go:334] "Generic (PLEG): container finished" podID="713a26da-b39a-4cd8-880a-97a52056cc2e" containerID="7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c" exitCode=1 Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.768865 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"713a26da-b39a-4cd8-880a-97a52056cc2e","Type":"ContainerDied","Data":"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c"} Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.768976 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"713a26da-b39a-4cd8-880a-97a52056cc2e","Type":"ContainerDied","Data":"9baa6b717d2a0dc2e881c4d7633531df149093c08ce2daf7ee5f1ebe4ae1c5a0"} Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.768937 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.769015 4730 scope.go:117] "RemoveContainer" containerID="7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.794846 4730 scope.go:117] "RemoveContainer" containerID="7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c" Feb 21 00:18:19 crc kubenswrapper[4730]: E0221 00:18:19.795470 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c\": container with ID starting with 7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c not found: ID does not exist" containerID="7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.795507 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c"} err="failed to get container status \"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c\": rpc error: code = NotFound desc = could not find container \"7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c\": container with ID starting with 7283d86b022cdbebe71b68e91e6f9ddb6f9e23369b0c452f267e6b5393cc3c0c not found: ID does not exist" Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.818441 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Feb 21 00:18:19 crc kubenswrapper[4730]: I0221 00:18:19.825729 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Feb 21 00:18:20 crc kubenswrapper[4730]: I0221 00:18:20.705250 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="713a26da-b39a-4cd8-880a-97a52056cc2e" path="/var/lib/kubelet/pods/713a26da-b39a-4cd8-880a-97a52056cc2e/volumes" Feb 21 00:18:24 crc kubenswrapper[4730]: I0221 00:18:24.322720 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:18:24 crc kubenswrapper[4730]: I0221 00:18:24.323114 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.262295 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:29 crc kubenswrapper[4730]: E0221 00:18:29.263017 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713a26da-b39a-4cd8-880a-97a52056cc2e" containerName="git-clone" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.263030 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="713a26da-b39a-4cd8-880a-97a52056cc2e" containerName="git-clone" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.263134 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="713a26da-b39a-4cd8-880a-97a52056cc2e" containerName="git-clone" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.263842 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.266171 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.266501 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-sys-config" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.266821 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-global-ca" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.267819 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-ca" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.268187 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-sm5t8" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.292066 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360051 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360117 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360145 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lr2t\" (UniqueName: \"kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360166 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360188 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360209 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360228 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360252 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360280 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360294 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.360338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461788 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461850 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461878 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461964 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.461994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc 
kubenswrapper[4730]: I0221 00:18:29.462022 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462055 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lr2t\" (UniqueName: \"kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462182 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462208 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462246 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.462575 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.463484 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.463859 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.464018 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.464535 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.464557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" 
Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.464795 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.467424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.472542 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.472566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.480774 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lr2t\" (UniqueName: \"kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.481087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:29 crc kubenswrapper[4730]: I0221 00:18:29.584273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:30 crc kubenswrapper[4730]: I0221 00:18:30.041334 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:30 crc kubenswrapper[4730]: I0221 00:18:30.854323 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"2e157968-b43e-4c02-a4a2-b982f3630411","Type":"ContainerStarted","Data":"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d"} Feb 21 00:18:30 crc kubenswrapper[4730]: I0221 00:18:30.855016 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"2e157968-b43e-4c02-a4a2-b982f3630411","Type":"ContainerStarted","Data":"598980d49f54c8b38ac8715b8da4424a21e7e7185b5c701215353412d1886536"} Feb 21 00:18:30 crc kubenswrapper[4730]: E0221 00:18:30.942845 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4186130146597151669, SKID=, AKID=1A:B3:AC:79:DB:0C:0E:96:F7:CB:3B:DC:15:5F:B1:75:BF:53:EC:A7 failed: 
x509: certificate signed by unknown authority" Feb 21 00:18:31 crc kubenswrapper[4730]: I0221 00:18:31.981716 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:32 crc kubenswrapper[4730]: I0221 00:18:32.872339 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="2e157968-b43e-4c02-a4a2-b982f3630411" containerName="git-clone" containerID="cri-o://99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d" gracePeriod=30 Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.343670 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_2e157968-b43e-4c02-a4a2-b982f3630411/git-clone/0.log" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.343920 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519247 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519298 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519333 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519378 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519386 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519525 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lr2t\" (UniqueName: \"kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.519978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run" (OuterVolumeSpecName: 
"container-storage-run") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520020 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520110 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520148 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520165 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520275 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520348 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520383 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520408 
4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull\") pod \"2e157968-b43e-4c02-a4a2-b982f3630411\" (UID: \"2e157968-b43e-4c02-a4a2-b982f3630411\") " Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520762 4730 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520756 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520781 4730 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e157968-b43e-4c02-a4a2-b982f3630411-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520830 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520845 4730 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.520848 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.521028 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.521064 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.521110 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.527804 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-pull") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "builder-dockercfg-sm5t8-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.528345 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.530088 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t" (OuterVolumeSpecName: "kube-api-access-8lr2t") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "kube-api-access-8lr2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.531765 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-push") pod "2e157968-b43e-4c02-a4a2-b982f3630411" (UID: "2e157968-b43e-4c02-a4a2-b982f3630411"). InnerVolumeSpecName "builder-dockercfg-sm5t8-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621847 4730 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621881 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lr2t\" (UniqueName: \"kubernetes.io/projected/2e157968-b43e-4c02-a4a2-b982f3630411-kube-api-access-8lr2t\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621894 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-push\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621908 4730 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e157968-b43e-4c02-a4a2-b982f3630411-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621920 4730 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621934 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621969 4730 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621980 4730 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e157968-b43e-4c02-a4a2-b982f3630411-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.621993 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/2e157968-b43e-4c02-a4a2-b982f3630411-builder-dockercfg-sm5t8-pull\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881344 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_2e157968-b43e-4c02-a4a2-b982f3630411/git-clone/0.log" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881426 4730 generic.go:334] "Generic (PLEG): container finished" podID="2e157968-b43e-4c02-a4a2-b982f3630411" containerID="99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d" exitCode=1 Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881477 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"2e157968-b43e-4c02-a4a2-b982f3630411","Type":"ContainerDied","Data":"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d"} Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881529 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"2e157968-b43e-4c02-a4a2-b982f3630411","Type":"ContainerDied","Data":"598980d49f54c8b38ac8715b8da4424a21e7e7185b5c701215353412d1886536"} Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881558 4730 scope.go:117] "RemoveContainer" 
containerID="99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.881603 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.912896 4730 scope.go:117] "RemoveContainer" containerID="99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d" Feb 21 00:18:33 crc kubenswrapper[4730]: E0221 00:18:33.913422 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d\": container with ID starting with 99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d not found: ID does not exist" containerID="99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.913454 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d"} err="failed to get container status \"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d\": rpc error: code = NotFound desc = could not find container \"99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d\": container with ID starting with 99e62993a6afdea355cf3873b6ad52eb5c216a4b70cebbf0473ab021664ea14d not found: ID does not exist" Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.943207 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:33 crc kubenswrapper[4730]: I0221 00:18:33.952181 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 21 00:18:34 crc kubenswrapper[4730]: I0221 00:18:34.700514 4730 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2e157968-b43e-4c02-a4a2-b982f3630411" path="/var/lib/kubelet/pods/2e157968-b43e-4c02-a4a2-b982f3630411/volumes" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.409993 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:43 crc kubenswrapper[4730]: E0221 00:18:43.410673 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e157968-b43e-4c02-a4a2-b982f3630411" containerName="git-clone" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.410685 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e157968-b43e-4c02-a4a2-b982f3630411" containerName="git-clone" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.410785 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e157968-b43e-4c02-a4a2-b982f3630411" containerName="git-clone" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.411554 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.417589 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-ca" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.417666 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.417777 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-sys-config" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.418456 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-sm5t8" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.418600 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-global-ca" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.431077 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469555 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469604 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469640 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469773 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469798 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: 
\"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469862 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.469935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.470128 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.470241 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.470294 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhbh\" (UniqueName: \"kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.470333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570779 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570831 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs\") pod 
\"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570918 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhbh\" (UniqueName: \"kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.570984 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571082 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571107 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run\") pod 
\"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571156 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571597 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.571965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.572016 4730 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.572084 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.572196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.572896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.573099 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 
21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.576337 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.576401 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.581935 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.581935 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.591569 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: 
\"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.619803 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhbh\" (UniqueName: \"kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh\") pod \"service-telemetry-framework-index-4-build\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.733213 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.918927 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:43 crc kubenswrapper[4730]: W0221 00:18:43.930066 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod724fac1a_8fdf_4663_a333_8017f4bfc3cf.slice/crio-5a1cae62166d01108ab10b2e4a556bbe16e29740c7a0c45f3bd1ca0c3ac73f21 WatchSource:0}: Error finding container 5a1cae62166d01108ab10b2e4a556bbe16e29740c7a0c45f3bd1ca0c3ac73f21: Status 404 returned error can't find the container with id 5a1cae62166d01108ab10b2e4a556bbe16e29740c7a0c45f3bd1ca0c3ac73f21 Feb 21 00:18:43 crc kubenswrapper[4730]: I0221 00:18:43.956443 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"724fac1a-8fdf-4663-a333-8017f4bfc3cf","Type":"ContainerStarted","Data":"5a1cae62166d01108ab10b2e4a556bbe16e29740c7a0c45f3bd1ca0c3ac73f21"} Feb 21 00:18:44 crc kubenswrapper[4730]: 
I0221 00:18:44.967669 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"724fac1a-8fdf-4663-a333-8017f4bfc3cf","Type":"ContainerStarted","Data":"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4"} Feb 21 00:18:45 crc kubenswrapper[4730]: E0221 00:18:45.056326 4730 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4186130146597151669, SKID=, AKID=1A:B3:AC:79:DB:0C:0E:96:F7:CB:3B:DC:15:5F:B1:75:BF:53:EC:A7 failed: x509: certificate signed by unknown authority" Feb 21 00:18:46 crc kubenswrapper[4730]: I0221 00:18:46.084499 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:46 crc kubenswrapper[4730]: I0221 00:18:46.993706 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" containerName="git-clone" containerID="cri-o://3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4" gracePeriod=30 Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.135308 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.136434 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.139781 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-bk952" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.152363 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.221861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92h7\" (UniqueName: \"kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7\") pod \"infrawatch-operators-k5hp5\" (UID: \"ae65d26f-dce0-4412-b2d5-52a59648ed1a\") " pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.323443 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92h7\" (UniqueName: \"kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7\") pod \"infrawatch-operators-k5hp5\" (UID: \"ae65d26f-dce0-4412-b2d5-52a59648ed1a\") " pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.346962 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92h7\" (UniqueName: \"kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7\") pod \"infrawatch-operators-k5hp5\" (UID: \"ae65d26f-dce0-4412-b2d5-52a59648ed1a\") " pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.448177 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_724fac1a-8fdf-4663-a333-8017f4bfc3cf/git-clone/0.log" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.448262 4730 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.478803 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643446 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643504 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643573 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643604 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643647 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643688 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643738 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643766 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643799 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643840 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" 
(UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643897 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.643940 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.644003 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhbh\" (UniqueName: \"kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh\") pod \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\" (UID: \"724fac1a-8fdf-4663-a333-8017f4bfc3cf\") " Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.644299 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.644596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.644639 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.644902 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.645243 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.645499 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.645521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.645684 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.645704 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.649034 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh" (OuterVolumeSpecName: "kube-api-access-kfhbh") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "kube-api-access-kfhbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.649080 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-pull") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "builder-dockercfg-sm5t8-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.649140 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.649414 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push" (OuterVolumeSpecName: "builder-dockercfg-sm5t8-push") pod "724fac1a-8fdf-4663-a333-8017f4bfc3cf" (UID: "724fac1a-8fdf-4663-a333-8017f4bfc3cf"). InnerVolumeSpecName "builder-dockercfg-sm5t8-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.744979 4730 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745023 4730 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745039 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-push\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-push\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745055 4730 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-sm5t8-pull\" (UniqueName: \"kubernetes.io/secret/724fac1a-8fdf-4663-a333-8017f4bfc3cf-builder-dockercfg-sm5t8-pull\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745071 4730 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745084 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745095 4730 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745106 4730 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745118 4730 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745130 4730 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/724fac1a-8fdf-4663-a333-8017f4bfc3cf-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745143 4730 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/724fac1a-8fdf-4663-a333-8017f4bfc3cf-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745154 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhbh\" (UniqueName: \"kubernetes.io/projected/724fac1a-8fdf-4663-a333-8017f4bfc3cf-kube-api-access-kfhbh\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.745165 4730 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/724fac1a-8fdf-4663-a333-8017f4bfc3cf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:47 crc kubenswrapper[4730]: W0221 00:18:47.963718 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae65d26f_dce0_4412_b2d5_52a59648ed1a.slice/crio-0bc5e52c41ece0ec52aab71c0b8cb49e3e74b39ccd9fb5cda14ac63651885a48 WatchSource:0}: Error finding container 0bc5e52c41ece0ec52aab71c0b8cb49e3e74b39ccd9fb5cda14ac63651885a48: Status 404 returned error can't find the container with id 0bc5e52c41ece0ec52aab71c0b8cb49e3e74b39ccd9fb5cda14ac63651885a48 Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.965136 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997264 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_724fac1a-8fdf-4663-a333-8017f4bfc3cf/git-clone/0.log" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997333 4730 generic.go:334] "Generic (PLEG): container finished" podID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" containerID="3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4" exitCode=1 Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"724fac1a-8fdf-4663-a333-8017f4bfc3cf","Type":"ContainerDied","Data":"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4"} Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997465 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997491 4730 scope.go:117] "RemoveContainer" containerID="3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4" Feb 21 00:18:47 crc kubenswrapper[4730]: I0221 00:18:47.997480 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"724fac1a-8fdf-4663-a333-8017f4bfc3cf","Type":"ContainerDied","Data":"5a1cae62166d01108ab10b2e4a556bbe16e29740c7a0c45f3bd1ca0c3ac73f21"} Feb 21 00:18:48 crc kubenswrapper[4730]: I0221 00:18:47.999135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-k5hp5" event={"ID":"ae65d26f-dce0-4412-b2d5-52a59648ed1a","Type":"ContainerStarted","Data":"0bc5e52c41ece0ec52aab71c0b8cb49e3e74b39ccd9fb5cda14ac63651885a48"} Feb 21 00:18:48 crc kubenswrapper[4730]: E0221 00:18:48.008309 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:18:48 crc kubenswrapper[4730]: E0221 00:18:48.008728 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n92h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-k5hp5_service-telemetry(ae65d26f-dce0-4412-b2d5-52a59648ed1a): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:18:48 crc kubenswrapper[4730]: E0221 00:18:48.009898 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-k5hp5" podUID="ae65d26f-dce0-4412-b2d5-52a59648ed1a" Feb 21 00:18:48 crc kubenswrapper[4730]: I0221 00:18:48.018524 4730 scope.go:117] "RemoveContainer" containerID="3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4" Feb 21 00:18:48 crc kubenswrapper[4730]: E0221 00:18:48.018903 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4\": container with ID starting with 3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4 not found: ID does not exist" containerID="3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4" Feb 21 00:18:48 crc kubenswrapper[4730]: I0221 00:18:48.018936 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4"} err="failed to get container status \"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4\": rpc error: code = NotFound desc = could not find container \"3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4\": container with ID starting with 3649f7e19e73760ee9e85dc5a8be303d93c1a90fb146a004fd3adbbbdb072ab4 not found: ID does not exist" Feb 21 00:18:48 crc 
kubenswrapper[4730]: I0221 00:18:48.037144 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:48 crc kubenswrapper[4730]: I0221 00:18:48.040625 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 21 00:18:48 crc kubenswrapper[4730]: I0221 00:18:48.700563 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" path="/var/lib/kubelet/pods/724fac1a-8fdf-4663-a333-8017f4bfc3cf/volumes" Feb 21 00:18:49 crc kubenswrapper[4730]: E0221 00:18:49.006878 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-k5hp5" podUID="ae65d26f-dce0-4412-b2d5-52a59648ed1a" Feb 21 00:18:51 crc kubenswrapper[4730]: I0221 00:18:51.309355 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:51 crc kubenswrapper[4730]: I0221 00:18:51.663302 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:51 crc kubenswrapper[4730]: I0221 00:18:51.693752 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92h7\" (UniqueName: \"kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7\") pod \"ae65d26f-dce0-4412-b2d5-52a59648ed1a\" (UID: \"ae65d26f-dce0-4412-b2d5-52a59648ed1a\") " Feb 21 00:18:51 crc kubenswrapper[4730]: I0221 00:18:51.699759 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7" (OuterVolumeSpecName: "kube-api-access-n92h7") pod "ae65d26f-dce0-4412-b2d5-52a59648ed1a" (UID: "ae65d26f-dce0-4412-b2d5-52a59648ed1a"). InnerVolumeSpecName "kube-api-access-n92h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:18:51 crc kubenswrapper[4730]: I0221 00:18:51.795066 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92h7\" (UniqueName: \"kubernetes.io/projected/ae65d26f-dce0-4412-b2d5-52a59648ed1a-kube-api-access-n92h7\") on node \"crc\" DevicePath \"\"" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.026482 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-k5hp5" event={"ID":"ae65d26f-dce0-4412-b2d5-52a59648ed1a","Type":"ContainerDied","Data":"0bc5e52c41ece0ec52aab71c0b8cb49e3e74b39ccd9fb5cda14ac63651885a48"} Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.027180 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-k5hp5" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.079626 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.084872 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-k5hp5"] Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.108807 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-wffsm"] Feb 21 00:18:52 crc kubenswrapper[4730]: E0221 00:18:52.109049 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" containerName="git-clone" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.109061 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" containerName="git-clone" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.109157 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="724fac1a-8fdf-4663-a333-8017f4bfc3cf" containerName="git-clone" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.109532 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-wffsm" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.111326 4730 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-bk952" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.128684 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wffsm"] Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.201125 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jrd9\" (UniqueName: \"kubernetes.io/projected/8e54debe-02a8-4a6a-a22a-9403776c881f-kube-api-access-4jrd9\") pod \"infrawatch-operators-wffsm\" (UID: \"8e54debe-02a8-4a6a-a22a-9403776c881f\") " pod="service-telemetry/infrawatch-operators-wffsm" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.302908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jrd9\" (UniqueName: \"kubernetes.io/projected/8e54debe-02a8-4a6a-a22a-9403776c881f-kube-api-access-4jrd9\") pod \"infrawatch-operators-wffsm\" (UID: \"8e54debe-02a8-4a6a-a22a-9403776c881f\") " pod="service-telemetry/infrawatch-operators-wffsm" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.321415 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jrd9\" (UniqueName: \"kubernetes.io/projected/8e54debe-02a8-4a6a-a22a-9403776c881f-kube-api-access-4jrd9\") pod \"infrawatch-operators-wffsm\" (UID: \"8e54debe-02a8-4a6a-a22a-9403776c881f\") " pod="service-telemetry/infrawatch-operators-wffsm" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.421891 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-wffsm" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.707049 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae65d26f-dce0-4412-b2d5-52a59648ed1a" path="/var/lib/kubelet/pods/ae65d26f-dce0-4412-b2d5-52a59648ed1a/volumes" Feb 21 00:18:52 crc kubenswrapper[4730]: I0221 00:18:52.889677 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wffsm"] Feb 21 00:18:52 crc kubenswrapper[4730]: E0221 00:18:52.934586 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:18:52 crc kubenswrapper[4730]: E0221 00:18:52.934778 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:18:52 crc kubenswrapper[4730]: E0221 00:18:52.935992 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:18:53 crc kubenswrapper[4730]: I0221 00:18:53.053820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wffsm" event={"ID":"8e54debe-02a8-4a6a-a22a-9403776c881f","Type":"ContainerStarted","Data":"a19f40f27f25693b5a96fda013d8c3e9aef9f62d1f84be9a4f173d35d6f4537e"} Feb 21 00:18:53 crc kubenswrapper[4730]: E0221 00:18:53.056222 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:18:54 crc kubenswrapper[4730]: E0221 00:18:54.062057 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:18:54 crc kubenswrapper[4730]: I0221 00:18:54.322591 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:18:54 crc kubenswrapper[4730]: I0221 00:18:54.323128 4730 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:19:06 crc kubenswrapper[4730]: E0221 00:19:06.733192 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:19:06 crc kubenswrapper[4730]: E0221 00:19:06.734135 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:19:06 crc kubenswrapper[4730]: E0221 00:19:06.736457 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:19:19 crc kubenswrapper[4730]: E0221 00:19:19.697092 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:19:24 crc kubenswrapper[4730]: I0221 00:19:24.322782 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:19:24 crc kubenswrapper[4730]: I0221 00:19:24.323211 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:19:24 crc kubenswrapper[4730]: I0221 00:19:24.323275 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:19:24 crc kubenswrapper[4730]: I0221 00:19:24.324172 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 00:19:24 crc kubenswrapper[4730]: I0221 00:19:24.324272 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127" gracePeriod=600 Feb 21 00:19:25 crc kubenswrapper[4730]: I0221 00:19:25.289616 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127" exitCode=0 Feb 21 00:19:25 crc kubenswrapper[4730]: I0221 00:19:25.289647 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127"} Feb 21 00:19:25 crc kubenswrapper[4730]: I0221 00:19:25.290189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2"} Feb 21 00:19:25 crc kubenswrapper[4730]: I0221 00:19:25.290208 4730 scope.go:117] "RemoveContainer" containerID="24a66c9695cdd5120edbf23d4909db11bb0c6c079f6a9c66eb1e643203703abe" Feb 21 00:19:26 crc kubenswrapper[4730]: I0221 00:19:26.131936 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 00:19:31 crc 
kubenswrapper[4730]: E0221 00:19:31.737583 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:19:31 crc kubenswrapper[4730]: E0221 00:19:31.738598 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:19:31 crc kubenswrapper[4730]: E0221 00:19:31.739919 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" 
podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:19:46 crc kubenswrapper[4730]: E0221 00:19:46.695155 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:20:00 crc kubenswrapper[4730]: E0221 00:20:00.695786 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:20:13 crc kubenswrapper[4730]: E0221 00:20:13.742380 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:20:13 crc kubenswrapper[4730]: E0221 00:20:13.743239 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:20:13 crc kubenswrapper[4730]: E0221 00:20:13.744417 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:20:27 crc kubenswrapper[4730]: E0221 00:20:27.695530 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:20:40 crc kubenswrapper[4730]: E0221 00:20:40.699601 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.318628 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmgsv"] Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.319994 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.339903 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmgsv"]
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.390900 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92r8\" (UniqueName: \"kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.391280 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.391408 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.492863 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.493143 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.493245 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92r8\" (UniqueName: \"kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.493380 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.493715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.513823 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92r8\" (UniqueName: \"kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8\") pod \"community-operators-gmgsv\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") " pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.637993 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.867866 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmgsv"]
Feb 21 00:20:49 crc kubenswrapper[4730]: I0221 00:20:49.924024 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerStarted","Data":"90c018591521698d4fbc8da366d7c8c35287e7bf12c6bf5155f617a5df0bc0d1"}
Feb 21 00:20:50 crc kubenswrapper[4730]: I0221 00:20:50.933830 4730 generic.go:334] "Generic (PLEG): container finished" podID="592d4d35-6c32-4914-a87a-eaefd5484350" containerID="1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345" exitCode=0
Feb 21 00:20:50 crc kubenswrapper[4730]: I0221 00:20:50.933906 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerDied","Data":"1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345"}
Feb 21 00:20:51 crc kubenswrapper[4730]: E0221 00:20:51.695512 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:20:51 crc kubenswrapper[4730]: I0221 00:20:51.947397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerStarted","Data":"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"}
Feb 21 00:20:52 crc kubenswrapper[4730]: I0221 00:20:52.958578 4730 generic.go:334] "Generic (PLEG): container finished" podID="592d4d35-6c32-4914-a87a-eaefd5484350" containerID="9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71" exitCode=0
Feb 21 00:20:52 crc kubenswrapper[4730]: I0221 00:20:52.958644 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerDied","Data":"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"}
Feb 21 00:20:52 crc kubenswrapper[4730]: I0221 00:20:52.958682 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerStarted","Data":"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"}
Feb 21 00:20:52 crc kubenswrapper[4730]: I0221 00:20:52.986512 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmgsv" podStartSLOduration=2.497691868 podStartE2EDuration="3.986496103s" podCreationTimestamp="2026-02-21 00:20:49 +0000 UTC" firstStartedPulling="2026-02-21 00:20:50.936527781 +0000 UTC m=+852.948094766" lastFinishedPulling="2026-02-21 00:20:52.425332066 +0000 UTC m=+854.436899001" observedRunningTime="2026-02-21 00:20:52.985348175 +0000 UTC m=+854.996915110" watchObservedRunningTime="2026-02-21 00:20:52.986496103 +0000 UTC m=+854.998063038"
Feb 21 00:20:59 crc kubenswrapper[4730]: I0221 00:20:59.638564 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:59 crc kubenswrapper[4730]: I0221 00:20:59.638908 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:20:59 crc kubenswrapper[4730]: I0221 00:20:59.682105 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:21:00 crc kubenswrapper[4730]: I0221 00:21:00.041099 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:21:00 crc kubenswrapper[4730]: I0221 00:21:00.082863 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmgsv"]
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.013695 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmgsv" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="registry-server" containerID="cri-o://1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a" gracePeriod=2
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.463450 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.602319 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content\") pod \"592d4d35-6c32-4914-a87a-eaefd5484350\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") "
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.602414 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92r8\" (UniqueName: \"kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8\") pod \"592d4d35-6c32-4914-a87a-eaefd5484350\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") "
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.602475 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities\") pod \"592d4d35-6c32-4914-a87a-eaefd5484350\" (UID: \"592d4d35-6c32-4914-a87a-eaefd5484350\") "
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.603518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities" (OuterVolumeSpecName: "utilities") pod "592d4d35-6c32-4914-a87a-eaefd5484350" (UID: "592d4d35-6c32-4914-a87a-eaefd5484350"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.607841 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8" (OuterVolumeSpecName: "kube-api-access-v92r8") pod "592d4d35-6c32-4914-a87a-eaefd5484350" (UID: "592d4d35-6c32-4914-a87a-eaefd5484350"). InnerVolumeSpecName "kube-api-access-v92r8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.646657 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592d4d35-6c32-4914-a87a-eaefd5484350" (UID: "592d4d35-6c32-4914-a87a-eaefd5484350"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.704456 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.704517 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592d4d35-6c32-4914-a87a-eaefd5484350-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:02 crc kubenswrapper[4730]: I0221 00:21:02.704536 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92r8\" (UniqueName: \"kubernetes.io/projected/592d4d35-6c32-4914-a87a-eaefd5484350-kube-api-access-v92r8\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.022288 4730 generic.go:334] "Generic (PLEG): container finished" podID="592d4d35-6c32-4914-a87a-eaefd5484350" containerID="1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a" exitCode=0
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.022342 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmgsv"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.022339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerDied","Data":"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"}
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.022544 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmgsv" event={"ID":"592d4d35-6c32-4914-a87a-eaefd5484350","Type":"ContainerDied","Data":"90c018591521698d4fbc8da366d7c8c35287e7bf12c6bf5155f617a5df0bc0d1"}
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.022589 4730 scope.go:117] "RemoveContainer" containerID="1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.045594 4730 scope.go:117] "RemoveContainer" containerID="9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.049779 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmgsv"]
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.058173 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmgsv"]
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.070259 4730 scope.go:117] "RemoveContainer" containerID="1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.093585 4730 scope.go:117] "RemoveContainer" containerID="1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"
Feb 21 00:21:03 crc kubenswrapper[4730]: E0221 00:21:03.094308 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a\": container with ID starting with 1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a not found: ID does not exist" containerID="1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.094362 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a"} err="failed to get container status \"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a\": rpc error: code = NotFound desc = could not find container \"1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a\": container with ID starting with 1cb1d64aa42207a08ddeb37e2e1d624ef089b389f05b44804c071f341bb3596a not found: ID does not exist"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.094395 4730 scope.go:117] "RemoveContainer" containerID="9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"
Feb 21 00:21:03 crc kubenswrapper[4730]: E0221 00:21:03.094762 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71\": container with ID starting with 9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71 not found: ID does not exist" containerID="9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.094802 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71"} err="failed to get container status \"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71\": rpc error: code = NotFound desc = could not find container \"9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71\": container with ID starting with 9b176e6b6702d31316afb9939f4964479532ede59884c31692a31fe64d7fcf71 not found: ID does not exist"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.094827 4730 scope.go:117] "RemoveContainer" containerID="1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345"
Feb 21 00:21:03 crc kubenswrapper[4730]: E0221 00:21:03.095390 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345\": container with ID starting with 1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345 not found: ID does not exist" containerID="1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345"
Feb 21 00:21:03 crc kubenswrapper[4730]: I0221 00:21:03.095464 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345"} err="failed to get container status \"1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345\": rpc error: code = NotFound desc = could not find container \"1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345\": container with ID starting with 1c6bb2b019e9253a2ecec2700deeb8641c95b40a59c4204e95df82cbde9f7345 not found: ID does not exist"
Feb 21 00:21:03 crc kubenswrapper[4730]: E0221 00:21:03.695298 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:21:04 crc kubenswrapper[4730]: I0221 00:21:04.703713 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" path="/var/lib/kubelet/pods/592d4d35-6c32-4914-a87a-eaefd5484350/volumes"
Feb 21 00:21:17 crc kubenswrapper[4730]: E0221 00:21:17.695721 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:21:24 crc kubenswrapper[4730]: I0221 00:21:24.323395 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:21:24 crc kubenswrapper[4730]: I0221 00:21:24.324182 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:21:32 crc kubenswrapper[4730]: E0221 00:21:32.695490 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.163145 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"]
Feb 21 00:21:38 crc kubenswrapper[4730]: E0221 00:21:38.163714 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="extract-content"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.163730 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="extract-content"
Feb 21 00:21:38 crc kubenswrapper[4730]: E0221 00:21:38.163754 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="extract-utilities"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.163762 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="extract-utilities"
Feb 21 00:21:38 crc kubenswrapper[4730]: E0221 00:21:38.163775 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="registry-server"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.163783 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="registry-server"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.163898 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="592d4d35-6c32-4914-a87a-eaefd5484350" containerName="registry-server"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.164906 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.181423 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"]
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.226333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.226426 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.226481 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7vc\" (UniqueName: \"kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.327747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.327827 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.327855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7vc\" (UniqueName: \"kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.328734 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.329043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.348628 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7vc\" (UniqueName: \"kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc\") pod \"redhat-operators-lfbc5\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") " pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.481469 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:38 crc kubenswrapper[4730]: I0221 00:21:38.687651 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"]
Feb 21 00:21:38 crc kubenswrapper[4730]: W0221 00:21:38.704124 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c272eb2_667e_4c27_99ed_b71c2a6828e9.slice/crio-1a54ac2a694d0e941bd08a4ab69e2cd4bf66137456f98c0657cf7e39d086fe28 WatchSource:0}: Error finding container 1a54ac2a694d0e941bd08a4ab69e2cd4bf66137456f98c0657cf7e39d086fe28: Status 404 returned error can't find the container with id 1a54ac2a694d0e941bd08a4ab69e2cd4bf66137456f98c0657cf7e39d086fe28
Feb 21 00:21:39 crc kubenswrapper[4730]: I0221 00:21:39.280921 4730 generic.go:334] "Generic (PLEG): container finished" podID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerID="661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b" exitCode=0
Feb 21 00:21:39 crc kubenswrapper[4730]: I0221 00:21:39.281009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerDied","Data":"661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b"}
Feb 21 00:21:39 crc kubenswrapper[4730]: I0221 00:21:39.281035 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerStarted","Data":"1a54ac2a694d0e941bd08a4ab69e2cd4bf66137456f98c0657cf7e39d086fe28"}
Feb 21 00:21:39 crc kubenswrapper[4730]: I0221 00:21:39.283442 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 00:21:40 crc kubenswrapper[4730]: I0221 00:21:40.291089 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerStarted","Data":"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b"}
Feb 21 00:21:41 crc kubenswrapper[4730]: I0221 00:21:41.299558 4730 generic.go:334] "Generic (PLEG): container finished" podID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerID="85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b" exitCode=0
Feb 21 00:21:41 crc kubenswrapper[4730]: I0221 00:21:41.299591 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerDied","Data":"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b"}
Feb 21 00:21:42 crc kubenswrapper[4730]: I0221 00:21:42.308174 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerStarted","Data":"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"}
Feb 21 00:21:42 crc kubenswrapper[4730]: I0221 00:21:42.331485 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfbc5" podStartSLOduration=1.935940724 podStartE2EDuration="4.331468383s" podCreationTimestamp="2026-02-21 00:21:38 +0000 UTC" firstStartedPulling="2026-02-21 00:21:39.283153988 +0000 UTC m=+901.294720923" lastFinishedPulling="2026-02-21 00:21:41.678681617 +0000 UTC m=+903.690248582" observedRunningTime="2026-02-21 00:21:42.328018548 +0000 UTC m=+904.339585503" watchObservedRunningTime="2026-02-21 00:21:42.331468383 +0000 UTC m=+904.343035318"
Feb 21 00:21:46 crc kubenswrapper[4730]: E0221 00:21:46.726417 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:21:46 crc kubenswrapper[4730]: E0221 00:21:46.727124 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:21:46 crc kubenswrapper[4730]: E0221 00:21:46.728426 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:21:48 crc kubenswrapper[4730]: I0221 00:21:48.482576 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:48 crc kubenswrapper[4730]: I0221 00:21:48.482682 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:48 crc kubenswrapper[4730]: I0221 00:21:48.545409 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:49 crc kubenswrapper[4730]: I0221 00:21:49.395988 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:49 crc kubenswrapper[4730]: I0221 00:21:49.455667 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"]
Feb 21 00:21:51 crc kubenswrapper[4730]: I0221 00:21:51.363973 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lfbc5" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="registry-server" containerID="cri-o://78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535" gracePeriod=2
Feb 21 00:21:52 crc kubenswrapper[4730]: I0221 00:21:52.885427 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.062963 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities\") pod \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") "
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.063137 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7vc\" (UniqueName: \"kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc\") pod \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") "
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.063181 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content\") pod \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\" (UID: \"6c272eb2-667e-4c27-99ed-b71c2a6828e9\") "
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.064174 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities" (OuterVolumeSpecName: "utilities") pod "6c272eb2-667e-4c27-99ed-b71c2a6828e9" (UID: "6c272eb2-667e-4c27-99ed-b71c2a6828e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.072864 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc" (OuterVolumeSpecName: "kube-api-access-7s7vc") pod "6c272eb2-667e-4c27-99ed-b71c2a6828e9" (UID: "6c272eb2-667e-4c27-99ed-b71c2a6828e9"). InnerVolumeSpecName "kube-api-access-7s7vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.164726 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.164759 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7vc\" (UniqueName: \"kubernetes.io/projected/6c272eb2-667e-4c27-99ed-b71c2a6828e9-kube-api-access-7s7vc\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.190362 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c272eb2-667e-4c27-99ed-b71c2a6828e9" (UID: "6c272eb2-667e-4c27-99ed-b71c2a6828e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.266273 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c272eb2-667e-4c27-99ed-b71c2a6828e9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.376838 4730 generic.go:334] "Generic (PLEG): container finished" podID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerID="78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535" exitCode=0
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.376923 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerDied","Data":"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"}
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.377036 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbc5" event={"ID":"6c272eb2-667e-4c27-99ed-b71c2a6828e9","Type":"ContainerDied","Data":"1a54ac2a694d0e941bd08a4ab69e2cd4bf66137456f98c0657cf7e39d086fe28"}
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.377074 4730 scope.go:117] "RemoveContainer" containerID="78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.377582 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbc5"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.390813 4730 scope.go:117] "RemoveContainer" containerID="85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.407782 4730 scope.go:117] "RemoveContainer" containerID="661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.431700 4730 scope.go:117] "RemoveContainer" containerID="78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"
Feb 21 00:21:53 crc kubenswrapper[4730]: E0221 00:21:53.432392 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535\": container with ID starting with 78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535 not found: ID does not exist" containerID="78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"
Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.432438 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535"} err="failed to get container status \"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535\": rpc error: code = NotFound desc = could not find container
\"78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535\": container with ID starting with 78d63efc10d1a488207b79f355f80211e231f5663a09c7bcde4d524e2d8c9535 not found: ID does not exist" Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.432466 4730 scope.go:117] "RemoveContainer" containerID="85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b" Feb 21 00:21:53 crc kubenswrapper[4730]: E0221 00:21:53.433521 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b\": container with ID starting with 85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b not found: ID does not exist" containerID="85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b" Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.433550 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b"} err="failed to get container status \"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b\": rpc error: code = NotFound desc = could not find container \"85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b\": container with ID starting with 85be1bca55eaf1976c5afad68a8ad0768222b90eeef2b09e4bde66f6ac508a7b not found: ID does not exist" Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.433591 4730 scope.go:117] "RemoveContainer" containerID="661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b" Feb 21 00:21:53 crc kubenswrapper[4730]: E0221 00:21:53.434013 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b\": container with ID starting with 661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b not found: ID does not exist" 
containerID="661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b" Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.434126 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b"} err="failed to get container status \"661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b\": rpc error: code = NotFound desc = could not find container \"661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b\": container with ID starting with 661d3f6e94a7e4fb6588775e76437cfbd820e8b6a412898f63f2c16bf46a530b not found: ID does not exist" Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.448397 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"] Feb 21 00:21:53 crc kubenswrapper[4730]: I0221 00:21:53.454204 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lfbc5"] Feb 21 00:21:54 crc kubenswrapper[4730]: I0221 00:21:54.322250 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:21:54 crc kubenswrapper[4730]: I0221 00:21:54.322299 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:21:54 crc kubenswrapper[4730]: I0221 00:21:54.703643 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" 
path="/var/lib/kubelet/pods/6c272eb2-667e-4c27-99ed-b71c2a6828e9/volumes" Feb 21 00:21:59 crc kubenswrapper[4730]: E0221 00:21:59.697239 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.394750 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:00 crc kubenswrapper[4730]: E0221 00:22:00.395049 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="extract-utilities" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.395066 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="extract-utilities" Feb 21 00:22:00 crc kubenswrapper[4730]: E0221 00:22:00.395078 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="registry-server" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.395085 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="registry-server" Feb 21 00:22:00 crc kubenswrapper[4730]: E0221 00:22:00.395099 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="extract-content" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.395106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="extract-content" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.395235 4730 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6c272eb2-667e-4c27-99ed-b71c2a6828e9" containerName="registry-server" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.396203 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.410925 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.457918 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.457987 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.458008 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.559160 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities\") pod \"certified-operators-bhc6v\" (UID: 
\"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.559265 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.559289 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.559624 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.559783 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content\") pod \"certified-operators-bhc6v\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.578071 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f\") pod \"certified-operators-bhc6v\" (UID: 
\"4e71de9a-c117-4fee-990d-fba271f44d6a\") " pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.713193 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:00 crc kubenswrapper[4730]: I0221 00:22:00.953501 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:01 crc kubenswrapper[4730]: I0221 00:22:01.432321 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerID="ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01" exitCode=0 Feb 21 00:22:01 crc kubenswrapper[4730]: I0221 00:22:01.432371 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerDied","Data":"ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01"} Feb 21 00:22:01 crc kubenswrapper[4730]: I0221 00:22:01.432400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerStarted","Data":"05776454f0dcf65f46a59247e6beb6c599196c817cf69bbebb8aaac768639321"} Feb 21 00:22:03 crc kubenswrapper[4730]: I0221 00:22:03.451062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerStarted","Data":"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0"} Feb 21 00:22:04 crc kubenswrapper[4730]: I0221 00:22:04.459783 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerID="7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0" exitCode=0 Feb 21 00:22:04 crc kubenswrapper[4730]: I0221 
00:22:04.460910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerDied","Data":"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0"} Feb 21 00:22:05 crc kubenswrapper[4730]: I0221 00:22:05.468900 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerStarted","Data":"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89"} Feb 21 00:22:05 crc kubenswrapper[4730]: I0221 00:22:05.487869 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhc6v" podStartSLOduration=1.976524195 podStartE2EDuration="5.487848325s" podCreationTimestamp="2026-02-21 00:22:00 +0000 UTC" firstStartedPulling="2026-02-21 00:22:01.433658928 +0000 UTC m=+923.445225863" lastFinishedPulling="2026-02-21 00:22:04.944983058 +0000 UTC m=+926.956549993" observedRunningTime="2026-02-21 00:22:05.484599847 +0000 UTC m=+927.496166792" watchObservedRunningTime="2026-02-21 00:22:05.487848325 +0000 UTC m=+927.499415270" Feb 21 00:22:10 crc kubenswrapper[4730]: I0221 00:22:10.713587 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:10 crc kubenswrapper[4730]: I0221 00:22:10.713929 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:10 crc kubenswrapper[4730]: I0221 00:22:10.771579 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:11 crc kubenswrapper[4730]: I0221 00:22:11.561160 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 
00:22:11 crc kubenswrapper[4730]: I0221 00:22:11.605104 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:13 crc kubenswrapper[4730]: I0221 00:22:13.522389 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhc6v" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="registry-server" containerID="cri-o://de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89" gracePeriod=2 Feb 21 00:22:13 crc kubenswrapper[4730]: I0221 00:22:13.899508 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.045543 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content\") pod \"4e71de9a-c117-4fee-990d-fba271f44d6a\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.045715 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f\") pod \"4e71de9a-c117-4fee-990d-fba271f44d6a\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.045778 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities\") pod \"4e71de9a-c117-4fee-990d-fba271f44d6a\" (UID: \"4e71de9a-c117-4fee-990d-fba271f44d6a\") " Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.046558 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities" (OuterVolumeSpecName: "utilities") pod "4e71de9a-c117-4fee-990d-fba271f44d6a" (UID: "4e71de9a-c117-4fee-990d-fba271f44d6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.051081 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f" (OuterVolumeSpecName: "kube-api-access-xn92f") pod "4e71de9a-c117-4fee-990d-fba271f44d6a" (UID: "4e71de9a-c117-4fee-990d-fba271f44d6a"). InnerVolumeSpecName "kube-api-access-xn92f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.147457 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.147487 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/4e71de9a-c117-4fee-990d-fba271f44d6a-kube-api-access-xn92f\") on node \"crc\" DevicePath \"\"" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.531334 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerID="de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89" exitCode=0 Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.531382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerDied","Data":"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89"} Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.531407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bhc6v" event={"ID":"4e71de9a-c117-4fee-990d-fba271f44d6a","Type":"ContainerDied","Data":"05776454f0dcf65f46a59247e6beb6c599196c817cf69bbebb8aaac768639321"} Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.531414 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhc6v" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.531422 4730 scope.go:117] "RemoveContainer" containerID="de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.552889 4730 scope.go:117] "RemoveContainer" containerID="7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.577392 4730 scope.go:117] "RemoveContainer" containerID="ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.606395 4730 scope.go:117] "RemoveContainer" containerID="de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89" Feb 21 00:22:14 crc kubenswrapper[4730]: E0221 00:22:14.607059 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89\": container with ID starting with de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89 not found: ID does not exist" containerID="de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.607145 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89"} err="failed to get container status \"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89\": rpc error: code = NotFound desc = could not find container 
\"de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89\": container with ID starting with de92fa74fbc4cde3b9da75db5a058959983122c629e25035fa9a780e1eb64a89 not found: ID does not exist" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.607191 4730 scope.go:117] "RemoveContainer" containerID="7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0" Feb 21 00:22:14 crc kubenswrapper[4730]: E0221 00:22:14.609720 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0\": container with ID starting with 7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0 not found: ID does not exist" containerID="7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.609764 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0"} err="failed to get container status \"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0\": rpc error: code = NotFound desc = could not find container \"7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0\": container with ID starting with 7a07512ceca0ce7e23a775e8857f5805ad05f4cdc4bd8da8a643449029ad1cd0 not found: ID does not exist" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.609796 4730 scope.go:117] "RemoveContainer" containerID="ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01" Feb 21 00:22:14 crc kubenswrapper[4730]: E0221 00:22:14.610451 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01\": container with ID starting with ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01 not found: ID does not exist" 
containerID="ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.610523 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01"} err="failed to get container status \"ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01\": rpc error: code = NotFound desc = could not find container \"ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01\": container with ID starting with ce8670efa17b1f37f8d47e1d9af9be01a89c169b23ab48d950ca8018fccd5c01 not found: ID does not exist" Feb 21 00:22:14 crc kubenswrapper[4730]: E0221 00:22:14.696855 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.786997 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e71de9a-c117-4fee-990d-fba271f44d6a" (UID: "4e71de9a-c117-4fee-990d-fba271f44d6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.857015 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e71de9a-c117-4fee-990d-fba271f44d6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.872933 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:14 crc kubenswrapper[4730]: I0221 00:22:14.880539 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhc6v"] Feb 21 00:22:16 crc kubenswrapper[4730]: I0221 00:22:16.705703 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" path="/var/lib/kubelet/pods/4e71de9a-c117-4fee-990d-fba271f44d6a/volumes" Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.322656 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.323427 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.323492 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.324360 4730 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.324454 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2" gracePeriod=600
Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.608773 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2" exitCode=0
Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.608834 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2"}
Feb 21 00:22:24 crc kubenswrapper[4730]: I0221 00:22:24.609137 4730 scope.go:117] "RemoveContainer" containerID="e71ad00d652cbb89b58230b98021feaed326da2f441dcb2233defaa9931e5127"
Feb 21 00:22:25 crc kubenswrapper[4730]: I0221 00:22:25.620901 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef"}
Feb 21 00:22:25 crc kubenswrapper[4730]: E0221 00:22:25.695704 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:22:39 crc kubenswrapper[4730]: E0221 00:22:39.696341 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:22:53 crc kubenswrapper[4730]: E0221 00:22:53.695309 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:23:05 crc kubenswrapper[4730]: E0221 00:23:05.696586 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:23:18 crc kubenswrapper[4730]: E0221 00:23:18.701059 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:23:30 crc kubenswrapper[4730]: E0221 00:23:30.695522 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:23:41 crc kubenswrapper[4730]: E0221 00:23:41.695868 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:23:56 crc kubenswrapper[4730]: E0221 00:23:56.694709 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:24:10 crc kubenswrapper[4730]: E0221 00:24:10.696227 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:24:24 crc kubenswrapper[4730]: I0221 00:24:24.322565 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:24:24 crc kubenswrapper[4730]: I0221 00:24:24.324070 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:24:25 crc kubenswrapper[4730]: E0221 00:24:25.695398 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:24:38 crc kubenswrapper[4730]: E0221 00:24:38.752494 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:24:38 crc kubenswrapper[4730]: E0221 00:24:38.753217 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:24:38 crc kubenswrapper[4730]: E0221 00:24:38.754460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.047209 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-l4hk5"]
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.049028 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="extract-content"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.049144 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="extract-content"
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.049181 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="extract-utilities"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.049244 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="extract-utilities"
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.049284 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="registry-server"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.049301 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="registry-server"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.049534 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e71de9a-c117-4fee-990d-fba271f44d6a" containerName="registry-server"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.050398 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-l4hk5"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.053379 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-l4hk5"]
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.119757 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7xh\" (UniqueName: \"kubernetes.io/projected/60b7c706-1a74-4336-ad31-890d9228ae69-kube-api-access-kc7xh\") pod \"infrawatch-operators-l4hk5\" (UID: \"60b7c706-1a74-4336-ad31-890d9228ae69\") " pod="service-telemetry/infrawatch-operators-l4hk5"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.221576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7xh\" (UniqueName: \"kubernetes.io/projected/60b7c706-1a74-4336-ad31-890d9228ae69-kube-api-access-kc7xh\") pod \"infrawatch-operators-l4hk5\" (UID: \"60b7c706-1a74-4336-ad31-890d9228ae69\") " pod="service-telemetry/infrawatch-operators-l4hk5"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.262857 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7xh\" (UniqueName: \"kubernetes.io/projected/60b7c706-1a74-4336-ad31-890d9228ae69-kube-api-access-kc7xh\") pod \"infrawatch-operators-l4hk5\" (UID: \"60b7c706-1a74-4336-ad31-890d9228ae69\") " pod="service-telemetry/infrawatch-operators-l4hk5"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.382839 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-l4hk5"
Feb 21 00:24:48 crc kubenswrapper[4730]: I0221 00:24:48.724731 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-l4hk5"]
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.777470 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.778047 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:24:48 crc kubenswrapper[4730]: E0221 00:24:48.779322 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:24:49 crc kubenswrapper[4730]: I0221 00:24:49.583523 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-l4hk5" event={"ID":"60b7c706-1a74-4336-ad31-890d9228ae69","Type":"ContainerStarted","Data":"c491edc0470e5c1d91ae0743e4bc69aff62bfa1d78d057fe7cda7d228c209eed"}
Feb 21 00:24:49 crc kubenswrapper[4730]: E0221 00:24:49.585548 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:24:50 crc kubenswrapper[4730]: E0221 00:24:50.592888 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:24:50 crc kubenswrapper[4730]: E0221 00:24:50.696716 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:24:54 crc kubenswrapper[4730]: I0221 00:24:54.323561 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:24:54 crc kubenswrapper[4730]: I0221 00:24:54.324273 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:25:00 crc kubenswrapper[4730]: E0221 00:25:00.740217 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:25:00 crc kubenswrapper[4730]: E0221 00:25:00.741183 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:25:00 crc kubenswrapper[4730]: E0221 00:25:00.742558 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:25:05 crc kubenswrapper[4730]: E0221 00:25:05.695778 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:25:11 crc kubenswrapper[4730]: E0221 00:25:11.695304 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:25:16 crc kubenswrapper[4730]: E0221 00:25:16.694778 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:25:23 crc kubenswrapper[4730]: E0221 00:25:23.748731 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:25:23 crc kubenswrapper[4730]: E0221 00:25:23.750870 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:25:23 crc kubenswrapper[4730]: E0221 00:25:23.752628 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.322800 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.322881 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.322971 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8"
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.323751 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.323890 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef" gracePeriod=600
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.823876 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef" exitCode=0
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.824051 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef"}
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.824256 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc"}
Feb 21 00:25:24 crc kubenswrapper[4730]: I0221 00:25:24.824274 4730 scope.go:117] "RemoveContainer" containerID="9383ad3da6d328a8ded67c0993ce13eb48256c8fcdbb6c6d147064699b8b3ef2"
Feb 21 00:25:31 crc kubenswrapper[4730]: E0221 00:25:31.696062 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:25:36 crc kubenswrapper[4730]: E0221 00:25:36.696111 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:25:45 crc kubenswrapper[4730]: E0221 00:25:45.694809 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:25:49 crc kubenswrapper[4730]: E0221 00:25:49.695995 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:25:56 crc kubenswrapper[4730]: E0221 00:25:56.696736 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:26:02 crc kubenswrapper[4730]: E0221 00:26:02.694585 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:26:07 crc kubenswrapper[4730]: E0221 00:26:07.696363 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:26:15 crc kubenswrapper[4730]: E0221 00:26:15.735103 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:26:15 crc kubenswrapper[4730]: E0221 00:26:15.735775 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:26:15 crc kubenswrapper[4730]: E0221 00:26:15.737011 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:26:22 crc kubenswrapper[4730]: E0221 00:26:22.695865 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:26:28 crc kubenswrapper[4730]: E0221 00:26:28.698246 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:26:35 crc kubenswrapper[4730]: E0221 00:26:35.695645 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\""
pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:26:42 crc kubenswrapper[4730]: E0221 00:26:42.695473 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:26:48 crc kubenswrapper[4730]: E0221 00:26:48.701383 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:26:54 crc kubenswrapper[4730]: E0221 00:26:54.696621 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:02 crc kubenswrapper[4730]: E0221 00:27:02.695043 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:27:07 crc kubenswrapper[4730]: E0221 00:27:07.696400 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:17 crc kubenswrapper[4730]: E0221 00:27:17.695212 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:27:18 crc kubenswrapper[4730]: E0221 00:27:18.699511 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:24 crc kubenswrapper[4730]: I0221 00:27:24.323350 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:27:24 crc kubenswrapper[4730]: I0221 00:27:24.323745 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:27:30 crc kubenswrapper[4730]: E0221 00:27:30.695775 
4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:27:30 crc kubenswrapper[4730]: E0221 00:27:30.695811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:42 crc kubenswrapper[4730]: I0221 00:27:42.696576 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 00:27:42 crc kubenswrapper[4730]: E0221 00:27:42.743719 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:27:42 crc kubenswrapper[4730]: E0221 00:27:42.744027 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:27:42 crc kubenswrapper[4730]: E0221 00:27:42.745286 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:44 crc kubenswrapper[4730]: E0221 00:27:44.696374 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:27:53 crc kubenswrapper[4730]: E0221 00:27:53.695394 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:27:54 crc kubenswrapper[4730]: I0221 00:27:54.322385 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:27:54 crc kubenswrapper[4730]: I0221 00:27:54.322689 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:27:55 crc kubenswrapper[4730]: E0221 00:27:55.694556 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:28:05 crc kubenswrapper[4730]: E0221 00:28:05.695235 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:28:06 crc kubenswrapper[4730]: E0221 00:28:06.694982 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:28:17 crc kubenswrapper[4730]: E0221 00:28:17.696681 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:28:18 crc kubenswrapper[4730]: E0221 00:28:18.701588 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:28:24 crc kubenswrapper[4730]: I0221 00:28:24.322596 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 00:28:24 crc kubenswrapper[4730]: I0221 00:28:24.323041 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 00:28:24 crc kubenswrapper[4730]: I0221 00:28:24.323109 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" Feb 21 00:28:24 crc kubenswrapper[4730]: I0221 00:28:24.323851 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 00:28:24 crc kubenswrapper[4730]: I0221 00:28:24.323986 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc" gracePeriod=600 Feb 21 00:28:25 crc kubenswrapper[4730]: I0221 00:28:25.089369 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc" exitCode=0 Feb 21 00:28:25 crc kubenswrapper[4730]: I0221 00:28:25.089453 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc"} Feb 21 00:28:25 crc kubenswrapper[4730]: I0221 00:28:25.089998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"} Feb 21 00:28:25 crc kubenswrapper[4730]: I0221 00:28:25.090021 4730 scope.go:117] "RemoveContainer" 
containerID="6f182b92761156ebebc5a85fa8a956355c307da34d9efadb1a0009ad4d65acef" Feb 21 00:28:32 crc kubenswrapper[4730]: E0221 00:28:32.695600 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:28:33 crc kubenswrapper[4730]: E0221 00:28:33.696412 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:28:44 crc kubenswrapper[4730]: E0221 00:28:44.700740 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:28:47 crc kubenswrapper[4730]: E0221 00:28:47.695211 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:28:58 crc kubenswrapper[4730]: E0221 00:28:58.701731 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:29:00 crc kubenswrapper[4730]: E0221 00:29:00.695598 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:29:11 crc kubenswrapper[4730]: E0221 00:29:11.695579 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:29:12 crc kubenswrapper[4730]: E0221 00:29:12.695866 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:29:23 crc kubenswrapper[4730]: E0221 00:29:23.696284 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" 
pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:29:23 crc kubenswrapper[4730]: E0221 00:29:23.696551 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:29:36 crc kubenswrapper[4730]: E0221 00:29:36.697246 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:29:36 crc kubenswrapper[4730]: E0221 00:29:36.697257 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:29:48 crc kubenswrapper[4730]: E0221 00:29:48.752584 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:29:48 crc 
kubenswrapper[4730]: E0221 00:29:48.753587 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:29:48 crc kubenswrapper[4730]: E0221 00:29:48.754929 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:29:51 crc kubenswrapper[4730]: E0221 00:29:51.694420 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.138653 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"] Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.139811 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.142455 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.142668 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.155062 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"]
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.208077 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg696\" (UniqueName: \"kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.208133 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.208216 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.309696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.309810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg696\" (UniqueName: \"kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.309840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.310991 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.315513 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.324581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg696\" (UniqueName: \"kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696\") pod \"collect-profiles-29527230-zqfgq\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.470290 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:00 crc kubenswrapper[4730]: E0221 00:30:00.704277 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:30:00 crc kubenswrapper[4730]: I0221 00:30:00.895151 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"]
Feb 21 00:30:01 crc kubenswrapper[4730]: I0221 00:30:01.746695 4730 generic.go:334] "Generic (PLEG): container finished" podID="d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" containerID="143a76e153e8cb70b8a58b341e8c081af75b145e853009d4ff6ccc4cbca72bc2" exitCode=0
Feb 21 00:30:01 crc kubenswrapper[4730]: I0221 00:30:01.746744 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq" event={"ID":"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c","Type":"ContainerDied","Data":"143a76e153e8cb70b8a58b341e8c081af75b145e853009d4ff6ccc4cbca72bc2"}
Feb 21 00:30:01 crc kubenswrapper[4730]: I0221 00:30:01.746781 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq" event={"ID":"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c","Type":"ContainerStarted","Data":"f945b6dfe9e49ceb38e9abd4c3834245eacea395f7fd97581597352b74b0bd87"}
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.011971 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.146622 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume\") pod \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") "
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.146736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg696\" (UniqueName: \"kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696\") pod \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") "
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.146765 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume\") pod \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\" (UID: \"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c\") "
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.147825 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" (UID: "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.152307 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" (UID: "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.153872 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696" (OuterVolumeSpecName: "kube-api-access-gg696") pod "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" (UID: "d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c"). InnerVolumeSpecName "kube-api-access-gg696". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.248844 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg696\" (UniqueName: \"kubernetes.io/projected/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-kube-api-access-gg696\") on node \"crc\" DevicePath \"\""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.248887 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-config-volume\") on node \"crc\" DevicePath \"\""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.248905 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.697177 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xpn9q/must-gather-bm8wj"]
Feb 21 00:30:03 crc kubenswrapper[4730]: E0221 00:30:03.697694 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" containerName="collect-profiles"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.697705 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" containerName="collect-profiles"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.697824 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c" containerName="collect-profiles"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.698375 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.700315 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xpn9q"/"openshift-service-ca.crt"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.703824 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xpn9q/must-gather-bm8wj"]
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.726041 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xpn9q"/"kube-root-ca.crt"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.754604 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktzt\" (UniqueName: \"kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.754713 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.759367 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq" event={"ID":"d8ccd35a-2a02-45c2-9bcb-bec9a103bf6c","Type":"ContainerDied","Data":"f945b6dfe9e49ceb38e9abd4c3834245eacea395f7fd97581597352b74b0bd87"}
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.759399 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f945b6dfe9e49ceb38e9abd4c3834245eacea395f7fd97581597352b74b0bd87"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.759441 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527230-zqfgq"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.856134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.856216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktzt\" (UniqueName: \"kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.856612 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:03 crc kubenswrapper[4730]: I0221 00:30:03.880892 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktzt\" (UniqueName: \"kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt\") pod \"must-gather-bm8wj\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:04 crc kubenswrapper[4730]: I0221 00:30:04.016208 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xpn9q/must-gather-bm8wj"
Feb 21 00:30:04 crc kubenswrapper[4730]: I0221 00:30:04.288581 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xpn9q/must-gather-bm8wj"]
Feb 21 00:30:04 crc kubenswrapper[4730]: I0221 00:30:04.766571 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" event={"ID":"2929f3ba-724c-42a3-8b91-d73463d42e27","Type":"ContainerStarted","Data":"060ea547e8dbd85d3ae276c59c1e8bcfeb6f15a96ff58203840dbdaef0e3be10"}
Feb 21 00:30:06 crc kubenswrapper[4730]: E0221 00:30:06.698335 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:30:10 crc kubenswrapper[4730]: I0221 00:30:10.796662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" event={"ID":"2929f3ba-724c-42a3-8b91-d73463d42e27","Type":"ContainerStarted","Data":"95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782"}
Feb 21 00:30:10 crc kubenswrapper[4730]: I0221 00:30:10.797356 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" event={"ID":"2929f3ba-724c-42a3-8b91-d73463d42e27","Type":"ContainerStarted","Data":"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b"}
Feb 21 00:30:10 crc kubenswrapper[4730]: I0221 00:30:10.812205 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" podStartSLOduration=1.7766196380000001 podStartE2EDuration="7.812188756s" podCreationTimestamp="2026-02-21 00:30:03 +0000 UTC" firstStartedPulling="2026-02-21 00:30:04.319039183 +0000 UTC m=+1406.330606118" lastFinishedPulling="2026-02-21 00:30:10.354608311 +0000 UTC m=+1412.366175236" observedRunningTime="2026-02-21 00:30:10.811906279 +0000 UTC m=+1412.823473224" watchObservedRunningTime="2026-02-21 00:30:10.812188756 +0000 UTC m=+1412.823755691"
Feb 21 00:30:11 crc kubenswrapper[4730]: E0221 00:30:11.694455 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:30:19 crc kubenswrapper[4730]: E0221 00:30:19.696022 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:30:22 crc kubenswrapper[4730]: E0221 00:30:22.697069 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:30:24 crc kubenswrapper[4730]: I0221 00:30:24.323415 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:30:24 crc kubenswrapper[4730]: I0221 00:30:24.323808 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:30:32 crc kubenswrapper[4730]: E0221 00:30:32.725940 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 21 00:30:32 crc kubenswrapper[4730]: E0221 00:30:32.726541 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:30:32 crc kubenswrapper[4730]: E0221 00:30:32.727837 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:30:37 crc kubenswrapper[4730]: E0221 00:30:37.696496 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:30:46 crc kubenswrapper[4730]: E0221 00:30:46.695284 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:30:49 crc kubenswrapper[4730]: E0221 00:30:49.694909 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:30:50 crc kubenswrapper[4730]: I0221 00:30:50.449926 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4mngz_3334b1ca-87b2-436d-a994-15634b8240e2/control-plane-machine-set-operator/0.log"
Feb 21 00:30:50 crc kubenswrapper[4730]: I0221 00:30:50.581194 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z7gds_81ed0256-3be9-4994-85ee-f35c6be1bf63/machine-api-operator/0.log"
Feb 21 00:30:50 crc kubenswrapper[4730]: I0221 00:30:50.584333 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z7gds_81ed0256-3be9-4994-85ee-f35c6be1bf63/kube-rbac-proxy/0.log"
Feb 21 00:30:54 crc kubenswrapper[4730]: I0221 00:30:54.322497 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:30:54 crc kubenswrapper[4730]: I0221 00:30:54.322837 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:31:00 crc kubenswrapper[4730]: E0221 00:31:00.695930 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:31:01 crc kubenswrapper[4730]: E0221 00:31:01.693745 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:31:01 crc kubenswrapper[4730]: I0221 00:31:01.766263 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-vb7r6_dba1903b-ca71-407e-a9f2-4cb9932a636d/cert-manager-controller/0.log"
Feb 21 00:31:01 crc kubenswrapper[4730]: I0221 00:31:01.892861 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-jtkvs_611d7c89-5596-472d-8778-3724f0b2f5ea/cert-manager-cainjector/0.log"
Feb 21 00:31:01 crc kubenswrapper[4730]: I0221 00:31:01.975920 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8dfl9_b87a4216-583b-4228-a198-5c3a71ec9184/cert-manager-webhook/0.log"
Feb 21 00:31:14 crc kubenswrapper[4730]: I0221 00:31:14.386189 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xq7rz_1a394e26-92a9-44ea-b193-b9862b976124/prometheus-operator/0.log"
Feb 21 00:31:14 crc kubenswrapper[4730]: I0221 00:31:14.505386 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-664c67d4-nkrpd_753fc1e6-7a24-4a3f-8379-da79db56db71/prometheus-operator-admission-webhook/0.log"
Feb 21 00:31:14 crc kubenswrapper[4730]: I0221 00:31:14.577555 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-664c67d4-rlrmt_5af548d1-94ce-4f85-b585-ff94f0e6fd62/prometheus-operator-admission-webhook/0.log"
Feb 21 00:31:14 crc kubenswrapper[4730]: I0221 00:31:14.702492 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cs9wn_eed8544f-c759-403e-a3ec-d4e2f2374c80/operator/0.log"
Feb 21 00:31:14 crc kubenswrapper[4730]: I0221 00:31:14.750521 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ptzt6_7723fd91-2ebf-45d9-b8a5-07ed9281185a/perses-operator/0.log"
Feb 21 00:31:15 crc kubenswrapper[4730]: E0221 00:31:15.695261 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:31:16 crc kubenswrapper[4730]: E0221 00:31:16.695054 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:31:24 crc kubenswrapper[4730]: I0221 00:31:24.322769 4730 patch_prober.go:28] interesting pod/machine-config-daemon-plgd8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 00:31:24 crc kubenswrapper[4730]: I0221 00:31:24.323473 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 00:31:24 crc kubenswrapper[4730]: I0221 00:31:24.323522 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plgd8"
Feb 21 00:31:24 crc kubenswrapper[4730]: I0221 00:31:24.324054 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"} pod="openshift-machine-config-operator/machine-config-daemon-plgd8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 00:31:24 crc kubenswrapper[4730]: I0221 00:31:24.324102 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" containerName="machine-config-daemon" containerID="cri-o://ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" gracePeriod=600
Feb 21 00:31:24 crc kubenswrapper[4730]: E0221 00:31:24.450212 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889"
Feb 21 00:31:25 crc kubenswrapper[4730]: I0221 00:31:25.226860 4730 generic.go:334] "Generic (PLEG): container finished" podID="7622a560-9120-4202-b95a-246a806fe889" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" exitCode=0
Feb 21 00:31:25 crc kubenswrapper[4730]: I0221 00:31:25.226910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerDied","Data":"ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"}
Feb 21 00:31:25 crc kubenswrapper[4730]: I0221 00:31:25.226965 4730 scope.go:117] "RemoveContainer" containerID="f34494c63b57a696f93527ec5f0b24bbb9ba8d3fbd531211e18a64a3a5d46ccc"
Feb 21 00:31:25 crc kubenswrapper[4730]: I0221 00:31:25.227644 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"
Feb 21 00:31:25 crc kubenswrapper[4730]: E0221 00:31:25.228097 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889"
Feb 21 00:31:27 crc kubenswrapper[4730]: I0221 00:31:27.820557 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/util/0.log"
Feb 21 00:31:27 crc kubenswrapper[4730]: I0221 00:31:27.978920 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/util/0.log"
Feb 21 00:31:27 crc kubenswrapper[4730]: I0221 00:31:27.993715 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.005473 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.164479 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/util/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.165093 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.202336 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1d47dk_caa1d8d3-2eb9-4be2-90e0-6e8bf9929c51/extract/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.326235 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/util/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.499688 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/util/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.511173 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.515592 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.618253 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/util/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.674478 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/extract/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: E0221 00:31:28.701052 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.716470 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vrjrk_ee2681ce-cbfb-4563-a33b-3da5e5080efb/pull/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.795759 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/util/0.log"
Feb 21 00:31:28 crc kubenswrapper[4730]: I0221 00:31:28.978797 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/pull/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.028324 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/util/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.048075 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/pull/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.165816 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/util/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.192095 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/pull/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.216609 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jqgr9_53644c60-af55-4aa2-8ccc-94b0dbb5e4f5/extract/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.339198 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-utilities/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.526571 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-utilities/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.551367 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-content/0.log"
Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.553789 4730 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-content/0.log" Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.712781 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-content/0.log" Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.734646 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/extract-utilities/0.log" Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.884388 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-utilities/0.log" Feb 21 00:31:29 crc kubenswrapper[4730]: I0221 00:31:29.906385 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nhxt_a43d1320-32b1-4d4d-a815-263b30821c6a/registry-server/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.071310 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-utilities/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.075397 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-content/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.091382 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-content/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.255751 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-utilities/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.258413 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/extract-content/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.451249 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lh9g8_f2ccbf5c-7182-4185-b59e-5d43e2fd29c6/registry-server/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.480857 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-snqnv_69773a41-0e64-40ec-913e-be1b7abff235/marketplace-operator/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.538021 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-utilities/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.682100 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.683126 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.692732 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:30 crc kubenswrapper[4730]: E0221 00:31:30.697299 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.736268 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-content/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.742195 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-content/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.797466 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-utilities/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.855370 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.855433 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.855646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghwx\" (UniqueName: \"kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.935383 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-utilities/0.log" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.957351 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.957406 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.957463 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghwx\" (UniqueName: \"kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx\") pod \"community-operators-c2t9n\" (UID: 
\"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.957858 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.957924 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.981971 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghwx\" (UniqueName: \"kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx\") pod \"community-operators-c2t9n\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:30 crc kubenswrapper[4730]: I0221 00:31:30.996727 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:31 crc kubenswrapper[4730]: I0221 00:31:31.100247 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/registry-server/0.log" Feb 21 00:31:31 crc kubenswrapper[4730]: I0221 00:31:31.143413 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-56pck_a096d4c9-914f-4529-9fd9-6e699e91ab00/extract-content/0.log" Feb 21 00:31:31 crc kubenswrapper[4730]: I0221 00:31:31.305008 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:32 crc kubenswrapper[4730]: I0221 00:31:32.277724 4730 generic.go:334] "Generic (PLEG): container finished" podID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerID="7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537" exitCode=0 Feb 21 00:31:32 crc kubenswrapper[4730]: I0221 00:31:32.277785 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerDied","Data":"7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537"} Feb 21 00:31:32 crc kubenswrapper[4730]: I0221 00:31:32.278178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerStarted","Data":"996d029d74d05a21e639e57e6f55d24be22eadfe834a74704b2c234c6c3c2a92"} Feb 21 00:31:33 crc kubenswrapper[4730]: I0221 00:31:33.287370 4730 generic.go:334] "Generic (PLEG): container finished" podID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerID="a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928" exitCode=0 Feb 21 00:31:33 crc kubenswrapper[4730]: I0221 00:31:33.287457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerDied","Data":"a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928"} Feb 21 00:31:34 crc kubenswrapper[4730]: I0221 00:31:34.298561 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerStarted","Data":"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470"} Feb 21 00:31:34 crc kubenswrapper[4730]: I0221 00:31:34.320542 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2t9n" podStartSLOduration=2.9290846 podStartE2EDuration="4.32052539s" podCreationTimestamp="2026-02-21 00:31:30 +0000 UTC" firstStartedPulling="2026-02-21 00:31:32.280461373 +0000 UTC m=+1494.292028318" lastFinishedPulling="2026-02-21 00:31:33.671902183 +0000 UTC m=+1495.683469108" observedRunningTime="2026-02-21 00:31:34.316921163 +0000 UTC m=+1496.328488098" watchObservedRunningTime="2026-02-21 00:31:34.32052539 +0000 UTC m=+1496.332092325" Feb 21 00:31:38 crc kubenswrapper[4730]: I0221 00:31:38.698495 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:31:38 crc kubenswrapper[4730]: E0221 00:31:38.699302 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:31:40 crc kubenswrapper[4730]: I0221 00:31:40.996915 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:40 crc kubenswrapper[4730]: I0221 00:31:40.998689 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:41 crc kubenswrapper[4730]: I0221 00:31:41.052730 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:41 crc kubenswrapper[4730]: I0221 00:31:41.385584 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:41 crc kubenswrapper[4730]: I0221 00:31:41.435707 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:42 crc kubenswrapper[4730]: E0221 00:31:42.694660 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:31:42 crc kubenswrapper[4730]: E0221 00:31:42.694783 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.037009 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-664c67d4-nkrpd_753fc1e6-7a24-4a3f-8379-da79db56db71/prometheus-operator-admission-webhook/0.log" Feb 21 00:31:43 crc 
kubenswrapper[4730]: I0221 00:31:43.058400 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-664c67d4-rlrmt_5af548d1-94ce-4f85-b585-ff94f0e6fd62/prometheus-operator-admission-webhook/0.log" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.060234 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xq7rz_1a394e26-92a9-44ea-b193-b9862b976124/prometheus-operator/0.log" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.159614 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cs9wn_eed8544f-c759-403e-a3ec-d4e2f2374c80/operator/0.log" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.198543 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ptzt6_7723fd91-2ebf-45d9-b8a5-07ed9281185a/perses-operator/0.log" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.351606 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2t9n" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="registry-server" containerID="cri-o://7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470" gracePeriod=2 Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.698666 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.858095 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghwx\" (UniqueName: \"kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx\") pod \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.858146 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities\") pod \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.858231 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content\") pod \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\" (UID: \"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c\") " Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.859080 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities" (OuterVolumeSpecName: "utilities") pod "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" (UID: "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.866088 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx" (OuterVolumeSpecName: "kube-api-access-8ghwx") pod "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" (UID: "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c"). InnerVolumeSpecName "kube-api-access-8ghwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.925042 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" (UID: "7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.959474 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.959517 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghwx\" (UniqueName: \"kubernetes.io/projected/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-kube-api-access-8ghwx\") on node \"crc\" DevicePath \"\"" Feb 21 00:31:43 crc kubenswrapper[4730]: I0221 00:31:43.959529 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.361016 4730 generic.go:334] "Generic (PLEG): container finished" podID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerID="7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470" exitCode=0 Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.361106 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2t9n" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.361133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerDied","Data":"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470"} Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.361580 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2t9n" event={"ID":"7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c","Type":"ContainerDied","Data":"996d029d74d05a21e639e57e6f55d24be22eadfe834a74704b2c234c6c3c2a92"} Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.361621 4730 scope.go:117] "RemoveContainer" containerID="7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.382052 4730 scope.go:117] "RemoveContainer" containerID="a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.411779 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.420813 4730 scope.go:117] "RemoveContainer" containerID="7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.425004 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2t9n"] Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.441901 4730 scope.go:117] "RemoveContainer" containerID="7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470" Feb 21 00:31:44 crc kubenswrapper[4730]: E0221 00:31:44.442526 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470\": container with ID starting with 7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470 not found: ID does not exist" containerID="7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.442578 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470"} err="failed to get container status \"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470\": rpc error: code = NotFound desc = could not find container \"7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470\": container with ID starting with 7d30f46709c02358757f6f0e3180903dbd4ea7d3a3a7c559ac14ce22326bd470 not found: ID does not exist" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.442610 4730 scope.go:117] "RemoveContainer" containerID="a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928" Feb 21 00:31:44 crc kubenswrapper[4730]: E0221 00:31:44.443049 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928\": container with ID starting with a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928 not found: ID does not exist" containerID="a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.443089 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928"} err="failed to get container status \"a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928\": rpc error: code = NotFound desc = could not find container \"a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928\": container with ID 
starting with a4371aa6a17de6a3ee37bf8e9865c97f405026817a17e16e474737c1b9a90928 not found: ID does not exist" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.443114 4730 scope.go:117] "RemoveContainer" containerID="7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537" Feb 21 00:31:44 crc kubenswrapper[4730]: E0221 00:31:44.443330 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537\": container with ID starting with 7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537 not found: ID does not exist" containerID="7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.443359 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537"} err="failed to get container status \"7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537\": rpc error: code = NotFound desc = could not find container \"7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537\": container with ID starting with 7741043a55366f121e6b58f065189dc70aa4dffd15ad431754c8851fd1031537 not found: ID does not exist" Feb 21 00:31:44 crc kubenswrapper[4730]: I0221 00:31:44.704205 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" path="/var/lib/kubelet/pods/7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c/volumes" Feb 21 00:31:53 crc kubenswrapper[4730]: I0221 00:31:53.695230 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:31:53 crc kubenswrapper[4730]: E0221 00:31:53.696140 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:31:53 crc kubenswrapper[4730]: E0221 00:31:53.699335 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:31:55 crc kubenswrapper[4730]: E0221 00:31:55.695878 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:32:04 crc kubenswrapper[4730]: E0221 00:32:04.696147 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:32:08 crc kubenswrapper[4730]: I0221 00:32:08.699135 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:32:08 crc kubenswrapper[4730]: E0221 00:32:08.699744 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:32:08 crc kubenswrapper[4730]: E0221 00:32:08.701050 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:32:19 crc kubenswrapper[4730]: E0221 00:32:19.695467 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:32:20 crc kubenswrapper[4730]: I0221 00:32:20.696445 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:32:20 crc kubenswrapper[4730]: E0221 00:32:20.697004 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:32:21 crc kubenswrapper[4730]: E0221 00:32:21.695091 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:32:30 crc kubenswrapper[4730]: E0221 00:32:30.696414 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:32:31 crc kubenswrapper[4730]: I0221 00:32:31.709261 4730 generic.go:334] "Generic (PLEG): container finished" podID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerID="3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b" exitCode=0 Feb 21 00:32:31 crc kubenswrapper[4730]: I0221 00:32:31.709296 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" event={"ID":"2929f3ba-724c-42a3-8b91-d73463d42e27","Type":"ContainerDied","Data":"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b"} Feb 21 00:32:31 crc kubenswrapper[4730]: I0221 00:32:31.709610 4730 scope.go:117] "RemoveContainer" containerID="3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b" Feb 21 00:32:32 crc kubenswrapper[4730]: E0221 00:32:32.693993 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:32:32 crc kubenswrapper[4730]: I0221 00:32:32.714899 4730 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-xpn9q_must-gather-bm8wj_2929f3ba-724c-42a3-8b91-d73463d42e27/gather/0.log" Feb 21 00:32:35 crc kubenswrapper[4730]: I0221 00:32:35.693623 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:32:35 crc kubenswrapper[4730]: E0221 00:32:35.694239 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.175501 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xpn9q/must-gather-bm8wj"] Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.175784 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="copy" containerID="cri-o://95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782" gracePeriod=2 Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.184880 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xpn9q/must-gather-bm8wj"] Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.518105 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xpn9q_must-gather-bm8wj_2929f3ba-724c-42a3-8b91-d73463d42e27/copy/0.log" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.518909 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.562976 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktzt\" (UniqueName: \"kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt\") pod \"2929f3ba-724c-42a3-8b91-d73463d42e27\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.563095 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output\") pod \"2929f3ba-724c-42a3-8b91-d73463d42e27\" (UID: \"2929f3ba-724c-42a3-8b91-d73463d42e27\") " Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.569116 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt" (OuterVolumeSpecName: "kube-api-access-hktzt") pod "2929f3ba-724c-42a3-8b91-d73463d42e27" (UID: "2929f3ba-724c-42a3-8b91-d73463d42e27"). InnerVolumeSpecName "kube-api-access-hktzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.616441 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2929f3ba-724c-42a3-8b91-d73463d42e27" (UID: "2929f3ba-724c-42a3-8b91-d73463d42e27"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.664258 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktzt\" (UniqueName: \"kubernetes.io/projected/2929f3ba-724c-42a3-8b91-d73463d42e27-kube-api-access-hktzt\") on node \"crc\" DevicePath \"\"" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.664294 4730 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2929f3ba-724c-42a3-8b91-d73463d42e27-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.773420 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xpn9q_must-gather-bm8wj_2929f3ba-724c-42a3-8b91-d73463d42e27/copy/0.log" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.773894 4730 generic.go:334] "Generic (PLEG): container finished" podID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerID="95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782" exitCode=143 Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.773946 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xpn9q/must-gather-bm8wj" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.773956 4730 scope.go:117] "RemoveContainer" containerID="95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.803540 4730 scope.go:117] "RemoveContainer" containerID="3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.848890 4730 scope.go:117] "RemoveContainer" containerID="95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782" Feb 21 00:32:39 crc kubenswrapper[4730]: E0221 00:32:39.849422 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782\": container with ID starting with 95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782 not found: ID does not exist" containerID="95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.849466 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782"} err="failed to get container status \"95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782\": rpc error: code = NotFound desc = could not find container \"95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782\": container with ID starting with 95aa18870d4ed934449bce6133f18764bb45d3ec0b36cf52b23ed21efadaa782 not found: ID does not exist" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.849493 4730 scope.go:117] "RemoveContainer" containerID="3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b" Feb 21 00:32:39 crc kubenswrapper[4730]: E0221 00:32:39.849870 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b\": container with ID starting with 3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b not found: ID does not exist" containerID="3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b" Feb 21 00:32:39 crc kubenswrapper[4730]: I0221 00:32:39.849919 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b"} err="failed to get container status \"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b\": rpc error: code = NotFound desc = could not find container \"3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b\": container with ID starting with 3726560dcea6f612e10c8c4e8200fc58355fa25e04a9405d8e9c7181ac337a9b not found: ID does not exist" Feb 21 00:32:40 crc kubenswrapper[4730]: I0221 00:32:40.706480 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" path="/var/lib/kubelet/pods/2929f3ba-724c-42a3-8b91-d73463d42e27/volumes" Feb 21 00:32:41 crc kubenswrapper[4730]: E0221 00:32:41.694617 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:32:43 crc kubenswrapper[4730]: E0221 00:32:43.695577 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" 
podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:32:46 crc kubenswrapper[4730]: I0221 00:32:46.693221 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:32:46 crc kubenswrapper[4730]: E0221 00:32:46.693472 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:32:56 crc kubenswrapper[4730]: E0221 00:32:56.695445 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:32:57 crc kubenswrapper[4730]: I0221 00:32:57.694598 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:32:57 crc kubenswrapper[4730]: E0221 00:32:57.695160 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:32:57 crc kubenswrapper[4730]: E0221 00:32:57.696051 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.694448 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932207 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.932541 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="registry-server" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932558 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="registry-server" Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.932581 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="gather" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932593 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="gather" Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.932615 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="extract-content" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932625 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="extract-content" Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.932640 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="extract-utilities" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932650 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="extract-utilities" Feb 21 00:33:11 crc kubenswrapper[4730]: E0221 00:33:11.932665 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="copy" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932674 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="copy" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932850 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="copy" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932864 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c405a1a-26f5-4dd7-a2dc-bedd68de7d6c" containerName="registry-server" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.932877 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2929f3ba-724c-42a3-8b91-d73463d42e27" containerName="gather" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.934243 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:11 crc kubenswrapper[4730]: I0221 00:33:11.938108 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.073871 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.073930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.073979 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncqn\" (UniqueName: \"kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.175719 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.175782 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.175813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jncqn\" (UniqueName: \"kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.176196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.176272 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.195117 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jncqn\" (UniqueName: \"kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn\") pod \"certified-operators-696dl\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.256484 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.542837 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.694871 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:33:12 crc kubenswrapper[4730]: E0221 00:33:12.695346 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:33:12 crc kubenswrapper[4730]: E0221 00:33:12.695993 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.986084 4730 generic.go:334] "Generic (PLEG): container finished" podID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerID="3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a" exitCode=0 Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.986131 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerDied","Data":"3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a"} Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 
00:33:12.986172 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerStarted","Data":"d2e1be541fa8b22cbf3697543937f66f84f3dc86e8ccb5549adf62cda1aa91dc"} Feb 21 00:33:12 crc kubenswrapper[4730]: I0221 00:33:12.987652 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 00:33:13 crc kubenswrapper[4730]: I0221 00:33:13.992425 4730 generic.go:334] "Generic (PLEG): container finished" podID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerID="d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5" exitCode=0 Feb 21 00:33:13 crc kubenswrapper[4730]: I0221 00:33:13.992737 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerDied","Data":"d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5"} Feb 21 00:33:15 crc kubenswrapper[4730]: I0221 00:33:15.001369 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerStarted","Data":"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df"} Feb 21 00:33:15 crc kubenswrapper[4730]: I0221 00:33:15.021734 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-696dl" podStartSLOduration=2.65558414 podStartE2EDuration="4.021713045s" podCreationTimestamp="2026-02-21 00:33:11 +0000 UTC" firstStartedPulling="2026-02-21 00:33:12.987325154 +0000 UTC m=+1594.998892089" lastFinishedPulling="2026-02-21 00:33:14.353454059 +0000 UTC m=+1596.365020994" observedRunningTime="2026-02-21 00:33:15.018060437 +0000 UTC m=+1597.029627412" watchObservedRunningTime="2026-02-21 00:33:15.021713045 +0000 UTC m=+1597.033279990" Feb 21 00:33:22 crc 
kubenswrapper[4730]: I0221 00:33:22.437420 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.438754 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.455359 4730 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-fbhhr container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.455419 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" podUID="780f4657-32ba-4755-b1ca-76fbb94ed7b8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.455805 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g4j9p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.455832 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" podUID="495e727f-2c66-441e-ae90-7e3dcf5e79ce" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.456015 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g4j9p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.456039 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g4j9p" podUID="495e727f-2c66-441e-ae90-7e3dcf5e79ce" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.456199 4730 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-fbhhr container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.456220 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbhhr" podUID="780f4657-32ba-4755-b1ca-76fbb94ed7b8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 00:33:22 crc kubenswrapper[4730]: I0221 00:33:22.493738 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:22 crc kubenswrapper[4730]: E0221 
00:33:22.695133 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:33:23 crc kubenswrapper[4730]: I0221 00:33:23.487086 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:23 crc kubenswrapper[4730]: I0221 00:33:23.656521 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:23 crc kubenswrapper[4730]: I0221 00:33:23.693894 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:33:23 crc kubenswrapper[4730]: E0221 00:33:23.694594 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.463708 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-696dl" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="registry-server" containerID="cri-o://d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df" gracePeriod=2 Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.846286 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.901801 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content\") pod \"0279a22c-e262-42f3-acb9-46219ff74f9a\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.902070 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities\") pod \"0279a22c-e262-42f3-acb9-46219ff74f9a\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.902153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jncqn\" (UniqueName: \"kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn\") pod \"0279a22c-e262-42f3-acb9-46219ff74f9a\" (UID: \"0279a22c-e262-42f3-acb9-46219ff74f9a\") " Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.912514 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities" (OuterVolumeSpecName: "utilities") pod "0279a22c-e262-42f3-acb9-46219ff74f9a" (UID: "0279a22c-e262-42f3-acb9-46219ff74f9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.914272 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.919312 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn" (OuterVolumeSpecName: "kube-api-access-jncqn") pod "0279a22c-e262-42f3-acb9-46219ff74f9a" (UID: "0279a22c-e262-42f3-acb9-46219ff74f9a"). InnerVolumeSpecName "kube-api-access-jncqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 00:33:25 crc kubenswrapper[4730]: I0221 00:33:25.994971 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0279a22c-e262-42f3-acb9-46219ff74f9a" (UID: "0279a22c-e262-42f3-acb9-46219ff74f9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.015429 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jncqn\" (UniqueName: \"kubernetes.io/projected/0279a22c-e262-42f3-acb9-46219ff74f9a-kube-api-access-jncqn\") on node \"crc\" DevicePath \"\"" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.015454 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0279a22c-e262-42f3-acb9-46219ff74f9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.480488 4730 generic.go:334] "Generic (PLEG): container finished" podID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerID="d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df" exitCode=0 Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.480542 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-696dl" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.480560 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerDied","Data":"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df"} Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.480929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-696dl" event={"ID":"0279a22c-e262-42f3-acb9-46219ff74f9a","Type":"ContainerDied","Data":"d2e1be541fa8b22cbf3697543937f66f84f3dc86e8ccb5549adf62cda1aa91dc"} Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.480964 4730 scope.go:117] "RemoveContainer" containerID="d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.525682 4730 scope.go:117] "RemoveContainer" 
containerID="d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.532928 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.542541 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-696dl"] Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.562884 4730 scope.go:117] "RemoveContainer" containerID="3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.586789 4730 scope.go:117] "RemoveContainer" containerID="d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df" Feb 21 00:33:26 crc kubenswrapper[4730]: E0221 00:33:26.587319 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df\": container with ID starting with d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df not found: ID does not exist" containerID="d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.587402 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df"} err="failed to get container status \"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df\": rpc error: code = NotFound desc = could not find container \"d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df\": container with ID starting with d895b17ae552cc792abe3d5d419da43c4cd2dea71eef451a217204488f5961df not found: ID does not exist" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.587455 4730 scope.go:117] "RemoveContainer" 
containerID="d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5" Feb 21 00:33:26 crc kubenswrapper[4730]: E0221 00:33:26.587917 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5\": container with ID starting with d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5 not found: ID does not exist" containerID="d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.588019 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5"} err="failed to get container status \"d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5\": rpc error: code = NotFound desc = could not find container \"d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5\": container with ID starting with d4d8f8a4b167e0199413ed3c27c55882f53085e0e793ddd00d9175c6884dcfa5 not found: ID does not exist" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.588060 4730 scope.go:117] "RemoveContainer" containerID="3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a" Feb 21 00:33:26 crc kubenswrapper[4730]: E0221 00:33:26.588575 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a\": container with ID starting with 3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a not found: ID does not exist" containerID="3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.588627 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a"} err="failed to get container status \"3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a\": rpc error: code = NotFound desc = could not find container \"3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a\": container with ID starting with 3ce301c0ea903a39184fe4c358bef4733ed3f88bfdd93f29e01a2861d48de97a not found: ID does not exist" Feb 21 00:33:26 crc kubenswrapper[4730]: I0221 00:33:26.698615 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" path="/var/lib/kubelet/pods/0279a22c-e262-42f3-acb9-46219ff74f9a/volumes" Feb 21 00:33:27 crc kubenswrapper[4730]: E0221 00:33:27.695210 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:33:33 crc kubenswrapper[4730]: E0221 00:33:33.695190 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:33:34 crc kubenswrapper[4730]: I0221 00:33:34.693273 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:33:34 crc kubenswrapper[4730]: E0221 00:33:34.693550 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:33:40 crc kubenswrapper[4730]: E0221 00:33:40.695855 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:33:47 crc kubenswrapper[4730]: E0221 00:33:47.695136 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:33:48 crc kubenswrapper[4730]: I0221 00:33:48.696983 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:33:48 crc kubenswrapper[4730]: E0221 00:33:48.697322 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:33:53 crc kubenswrapper[4730]: E0221 00:33:53.694995 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:33:58 crc kubenswrapper[4730]: E0221 00:33:58.702719 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:34:03 crc kubenswrapper[4730]: I0221 00:34:03.693339 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:34:03 crc kubenswrapper[4730]: E0221 00:34:03.694089 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:34:04 crc kubenswrapper[4730]: E0221 00:34:04.696880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:34:09 crc kubenswrapper[4730]: E0221 00:34:09.695496 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:34:15 crc kubenswrapper[4730]: I0221 00:34:15.695118 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:34:15 crc kubenswrapper[4730]: E0221 00:34:15.696420 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:34:15 crc kubenswrapper[4730]: E0221 00:34:15.697363 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:34:23 crc kubenswrapper[4730]: E0221 00:34:23.695514 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:34:26 crc kubenswrapper[4730]: E0221 00:34:26.694764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:34:30 crc kubenswrapper[4730]: I0221 00:34:30.693646 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:34:30 crc kubenswrapper[4730]: E0221 00:34:30.694352 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:34:36 crc kubenswrapper[4730]: E0221 00:34:36.696268 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:34:37 crc kubenswrapper[4730]: E0221 00:34:37.694884 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:34:45 crc kubenswrapper[4730]: I0221 00:34:45.694661 4730 scope.go:117] "RemoveContainer" 
containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:34:45 crc kubenswrapper[4730]: E0221 00:34:45.696117 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:34:48 crc kubenswrapper[4730]: E0221 00:34:48.695569 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:34:48 crc kubenswrapper[4730]: E0221 00:34:48.705927 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:34:57 crc kubenswrapper[4730]: I0221 00:34:57.693124 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:34:57 crc kubenswrapper[4730]: E0221 00:34:57.693933 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:35:01 crc kubenswrapper[4730]: E0221 00:35:01.696933 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:35:02 crc kubenswrapper[4730]: E0221 00:35:02.742780 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:35:02 crc kubenswrapper[4730]: E0221 00:35:02.743070 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jrd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wffsm_service-telemetry(8e54debe-02a8-4a6a-a22a-9403776c881f): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 21 00:35:02 crc kubenswrapper[4730]: E0221 00:35:02.745055 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:35:10 crc kubenswrapper[4730]: I0221 00:35:10.693831 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:35:10 crc kubenswrapper[4730]: E0221 00:35:10.694769 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:35:15 crc kubenswrapper[4730]: E0221 00:35:15.695530 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:35:16 crc kubenswrapper[4730]: E0221 00:35:16.777296 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:35:21 crc kubenswrapper[4730]: I0221 00:35:21.693433 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:35:21 crc kubenswrapper[4730]: E0221 00:35:21.694361 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:35:27 crc kubenswrapper[4730]: E0221 00:35:27.694743 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69" Feb 21 00:35:27 crc kubenswrapper[4730]: E0221 00:35:27.694782 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:35:36 crc kubenswrapper[4730]: I0221 00:35:36.693578 4730 scope.go:117] "RemoveContainer" 
containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2" Feb 21 00:35:36 crc kubenswrapper[4730]: E0221 00:35:36.694433 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889" Feb 21 00:35:38 crc kubenswrapper[4730]: E0221 00:35:38.706791 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f" Feb 21 00:35:42 crc kubenswrapper[4730]: E0221 00:35:42.740822 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 21 00:35:42 crc kubenswrapper[4730]: E0221 00:35:42.742635 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-l4hk5_service-telemetry(60b7c706-1a74-4336-ad31-890d9228ae69): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 21 00:35:42 crc kubenswrapper[4730]: E0221 00:35:42.743875 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:35:49 crc kubenswrapper[4730]: E0221 00:35:49.696654 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:35:50 crc kubenswrapper[4730]: I0221 00:35:50.692924 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"
Feb 21 00:35:50 crc kubenswrapper[4730]: E0221 00:35:50.693672 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889"
Feb 21 00:35:56 crc kubenswrapper[4730]: E0221 00:35:56.695398 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:36:01 crc kubenswrapper[4730]: I0221 00:36:01.693072 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"
Feb 21 00:36:01 crc kubenswrapper[4730]: E0221 00:36:01.694811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889"
Feb 21 00:36:04 crc kubenswrapper[4730]: E0221 00:36:04.698036 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:36:08 crc kubenswrapper[4730]: E0221 00:36:08.700441 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:36:12 crc kubenswrapper[4730]: I0221 00:36:12.693000 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"
Feb 21 00:36:12 crc kubenswrapper[4730]: E0221 00:36:12.693736 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plgd8_openshift-machine-config-operator(7622a560-9120-4202-b95a-246a806fe889)\"" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" podUID="7622a560-9120-4202-b95a-246a806fe889"
Feb 21 00:36:18 crc kubenswrapper[4730]: E0221 00:36:18.701152 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:36:20 crc kubenswrapper[4730]: E0221 00:36:20.696502 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:36:26 crc kubenswrapper[4730]: I0221 00:36:26.693453 4730 scope.go:117] "RemoveContainer" containerID="ad86989cc274c8aa45eda7de9ef82797a606342c39f88add469ac643a12b87a2"
Feb 21 00:36:27 crc kubenswrapper[4730]: I0221 00:36:27.795987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plgd8" event={"ID":"7622a560-9120-4202-b95a-246a806fe889","Type":"ContainerStarted","Data":"fbb4d887cb913db04bceb38cc146063f5440250ea499f288a203e1e177fce922"}
Feb 21 00:36:32 crc kubenswrapper[4730]: E0221 00:36:32.694440 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:36:33 crc kubenswrapper[4730]: E0221 00:36:33.699657 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:36:43 crc kubenswrapper[4730]: E0221 00:36:43.695599 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:36:45 crc kubenswrapper[4730]: E0221 00:36:45.696137 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.376774 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m64x4"]
Feb 21 00:36:54 crc kubenswrapper[4730]: E0221 00:36:54.377940 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="registry-server"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.377992 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="registry-server"
Feb 21 00:36:54 crc kubenswrapper[4730]: E0221 00:36:54.378017 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="extract-content"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.378030 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="extract-content"
Feb 21 00:36:54 crc kubenswrapper[4730]: E0221 00:36:54.378054 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="extract-utilities"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.378068 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="extract-utilities"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.378263 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="0279a22c-e262-42f3-acb9-46219ff74f9a" containerName="registry-server"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.379686 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.426253 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m64x4"]
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.509742 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-catalog-content\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.509831 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-utilities\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.510011 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbsz\" (UniqueName: \"kubernetes.io/projected/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-kube-api-access-bnbsz\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.611598 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbsz\" (UniqueName: \"kubernetes.io/projected/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-kube-api-access-bnbsz\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.611663 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-catalog-content\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.611704 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-utilities\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.612259 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-catalog-content\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.612308 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-utilities\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.631483 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbsz\" (UniqueName: \"kubernetes.io/projected/698807c0-dcbd-46cd-a38c-86bd0c33a3ee-kube-api-access-bnbsz\") pod \"redhat-operators-m64x4\" (UID: \"698807c0-dcbd-46cd-a38c-86bd0c33a3ee\") " pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.742912 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m64x4"
Feb 21 00:36:54 crc kubenswrapper[4730]: I0221 00:36:54.959933 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m64x4"]
Feb 21 00:36:55 crc kubenswrapper[4730]: I0221 00:36:55.000701 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m64x4" event={"ID":"698807c0-dcbd-46cd-a38c-86bd0c33a3ee","Type":"ContainerStarted","Data":"95e6791bbb483895ffd90060e941fab4a752ca85a13e1423f2441938385d1f87"}
Feb 21 00:36:56 crc kubenswrapper[4730]: I0221 00:36:56.009047 4730 generic.go:334] "Generic (PLEG): container finished" podID="698807c0-dcbd-46cd-a38c-86bd0c33a3ee" containerID="1c095c91f34b49fb9defc57aec8a0d23e0dd9e50780ecd7fd77a8a1d80958e38" exitCode=0
Feb 21 00:36:56 crc kubenswrapper[4730]: I0221 00:36:56.009169 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m64x4" event={"ID":"698807c0-dcbd-46cd-a38c-86bd0c33a3ee","Type":"ContainerDied","Data":"1c095c91f34b49fb9defc57aec8a0d23e0dd9e50780ecd7fd77a8a1d80958e38"}
Feb 21 00:36:56 crc kubenswrapper[4730]: E0221 00:36:56.697720 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-wffsm" podUID="8e54debe-02a8-4a6a-a22a-9403776c881f"
Feb 21 00:36:57 crc kubenswrapper[4730]: I0221 00:36:57.020831 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m64x4" event={"ID":"698807c0-dcbd-46cd-a38c-86bd0c33a3ee","Type":"ContainerStarted","Data":"d34e0ff21b5646b10dcb5696ec512c67440cde152743a6b55cf328063cd29f8f"}
Feb 21 00:36:58 crc kubenswrapper[4730]: I0221 00:36:58.043263 4730 generic.go:334] "Generic (PLEG): container finished" podID="698807c0-dcbd-46cd-a38c-86bd0c33a3ee" containerID="d34e0ff21b5646b10dcb5696ec512c67440cde152743a6b55cf328063cd29f8f" exitCode=0
Feb 21 00:36:58 crc kubenswrapper[4730]: I0221 00:36:58.043483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m64x4" event={"ID":"698807c0-dcbd-46cd-a38c-86bd0c33a3ee","Type":"ContainerDied","Data":"d34e0ff21b5646b10dcb5696ec512c67440cde152743a6b55cf328063cd29f8f"}
Feb 21 00:36:59 crc kubenswrapper[4730]: I0221 00:36:59.054188 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m64x4" event={"ID":"698807c0-dcbd-46cd-a38c-86bd0c33a3ee","Type":"ContainerStarted","Data":"91efa5dfe25f126eaebbaeb8b93b65f52d9215052fe0ca2feba7e9496218c9f3"}
Feb 21 00:36:59 crc kubenswrapper[4730]: I0221 00:36:59.079396 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m64x4" podStartSLOduration=2.386003142 podStartE2EDuration="5.079375818s" podCreationTimestamp="2026-02-21 00:36:54 +0000 UTC" firstStartedPulling="2026-02-21 00:36:56.010566362 +0000 UTC m=+1818.022133297" lastFinishedPulling="2026-02-21 00:36:58.703939028 +0000 UTC m=+1820.715505973" observedRunningTime="2026-02-21 00:36:59.078132848 +0000 UTC m=+1821.089699813" watchObservedRunningTime="2026-02-21 00:36:59.079375818 +0000 UTC m=+1821.090942773"
Feb 21 00:36:59 crc kubenswrapper[4730]: E0221 00:36:59.699080 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-l4hk5" podUID="60b7c706-1a74-4336-ad31-890d9228ae69"